httpd hogging all memory until server crash

The server in question is used for processing data into reports. It has three cron jobs that run every minute to check the processing queue; if anything is found, they keep running until the queue is empty. On particularly busy days (every Tuesday for the last three months, plus odd days) the server crashes.
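For context (the exact command is in my comments below), each job is a plain crontab entry that hits a local processing endpoint over HTTP; each of the three lines looks something like:

* * * * * wget -q -O /dev/null http://localhost/core/processing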



Running ps aux | grep 'httpd' | awk '{print $6/1024 " MB";}' shows the three httpd workers gaining memory steadily, in a linear fashion. Since there are tons of reports the processes never finish, and they keep absorbing more memory.
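To quantify the growth, a small logging loop like the following sketch (same ps fields as above; the log path is arbitrary) records each worker's PID and RSS once a minute:

# Log per-worker RSS (ps aux column 6, in KB) once a minute.
# The bracketed grep pattern keeps grep itself out of the output.
while true ; do
    date >> /tmp/httpd-rss.log
    ps aux | grep '[h]ttpd' | awk '{printf "%s %.1f MB\n", $2, $6/1024}' >> /tmp/httpd-rss.log
    sleep 60
done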



This is a PHP runtime. Checking memory usage inside PHP shows it never gets anywhere near being an issue: there is a hard PHP memory limit of 256 MB, and in the PHP layer at least, memory is released after each report finishes processing. I am currently running a PHP memory profiler on a duplicate server to see whether anything in the PHP layer is causing this. However, the ps aux command above shows an httpd worker starting at about 270 MB and rising steadily until the whole server (8 GB RAM) runs out. So it seems likely that httpd is holding onto all the memory the PHP layer uses and not recycling it back to the PHP process or to the system. A snippet from the error log when this occurs is below.



[Tue Dec 04 09:20:34.805175 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:20:45.142735 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process

mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
[Tue Dec 04 09:20:45.764357 2018] [php7:error] [pid 12198] [client 127.0.0.1:37694] PHP Fatal error: Out of memory (allocated 2097152) (tried to allocate 65536 bytes) in /var/www/html/vendor/composer/ClassLoader.php on line 440
[Tue Dec 04 09:20:45.764411 2018] [php7:error] [pid 12197] [client 127.0.0.1:37692] PHP Fatal error: Out of memory (allocated 2097152) (tried to allocate 65536 bytes) in /var/www/html/vendor/composer/ClassLoader.php on line 440
[Tue Dec 04 09:20:45.764358 2018] [php7:error] [pid 12191] [client 127.0.0.1:37690] PHP Fatal error: Out of memory (allocated 2097152) (tried to allocate 65536 bytes) in /var/www/html/vendor/composer/ClassLoader.php on line 440
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
[Tue Dec 04 09:20:46.257438 2018] [php7:error] [pid 12191] [client 127.0.0.1:37690] PHP Fatal error: Out of memory (allocated 2097152) (tried to allocate 73728 bytes) in /var/www/html/vendor/composer/ClassLoader.php on line 440
[Tue Dec 04 09:20:46.257439 2018] [php7:error] [pid 12197] [client 127.0.0.1:37692] PHP Fatal error: Out of memory (allocated 2097152) (tried to allocate 73728 bytes) in /var/www/html/vendor/composer/ClassLoader.php on line 440
[Tue Dec 04 09:20:46.257481 2018] [php7:error] [pid 12198] [client 127.0.0.1:37694] PHP Fatal error: Out of memory (allocated 2097152) (tried to allocate 73728 bytes) in /var/www/html/vendor/composer/ClassLoader.php on line 440
[Tue Dec 04 09:20:57.249945 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:23:08.171314 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:24:33.415351 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:26:22.308600 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:27:55.057324 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:29:17.174173 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:30:39.193341 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:33:16.023329 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:34:53.208958 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:35:30.902310 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:36:05.215192 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:37:36.511811 2018] [mpm_prefork:error] [pid 2751] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
[Tue Dec 04 09:42:18.453045 2018] [suexec:notice] [pid 2699] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
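To check from outside PHP where a worker's bytes actually live, standard tools can break down a single process. A sketch; substitute a real worker PID for <pid>. A large, growing [heap]/anon total while PHP reports low usage would point at memory PHP has freed but the process allocator has kept:

# Per-mapping breakdown of one httpd worker's address space
pmap -x <pid> | tail -n 40

# Kernel accounting for the same process
grep -E 'VmRSS|VmData' /proc/<pid>/status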


httpd modules in use:



$ sudo httpd -M
Loaded Modules:
core_module (static)
so_module (static)
http_module (static)
access_compat_module (shared)
actions_module (shared)
alias_module (shared)
allowmethods_module (shared)
auth_basic_module (shared)
auth_digest_module (shared)
authn_anon_module (shared)
authn_core_module (shared)
authn_dbd_module (shared)
authn_dbm_module (shared)
authn_file_module (shared)
authn_socache_module (shared)
authz_core_module (shared)
authz_dbd_module (shared)
authz_dbm_module (shared)
authz_groupfile_module (shared)
authz_host_module (shared)
authz_owner_module (shared)
authz_user_module (shared)
autoindex_module (shared)
cache_module (shared)
cache_disk_module (shared)
cache_socache_module (shared)
data_module (shared)
dbd_module (shared)
deflate_module (shared)
dir_module (shared)
dumpio_module (shared)
echo_module (shared)
env_module (shared)
expires_module (shared)
ext_filter_module (shared)
filter_module (shared)
headers_module (shared)
http2_module (shared)
include_module (shared)
info_module (shared)
log_config_module (shared)
logio_module (shared)
macro_module (shared)
mime_magic_module (shared)
mime_module (shared)
negotiation_module (shared)
remoteip_module (shared)
reqtimeout_module (shared)
request_module (shared)
rewrite_module (shared)
setenvif_module (shared)
slotmem_plain_module (shared)
slotmem_shm_module (shared)
socache_dbm_module (shared)
socache_memcache_module (shared)
socache_shmcb_module (shared)
status_module (shared)
substitute_module (shared)
suexec_module (shared)
unixd_module (shared)
userdir_module (shared)
version_module (shared)
vhost_alias_module (shared)
watchdog_module (shared)
dav_module (shared)
dav_fs_module (shared)
dav_lock_module (shared)
lua_module (shared)
mpm_prefork_module (shared)
proxy_module (shared)
lbmethod_bybusyness_module (shared)
lbmethod_byrequests_module (shared)
lbmethod_bytraffic_module (shared)
lbmethod_heartbeat_module (shared)
proxy_ajp_module (shared)
proxy_balancer_module (shared)
proxy_connect_module (shared)
proxy_express_module (shared)
proxy_fcgi_module (shared)
proxy_fdpass_module (shared)
proxy_ftp_module (shared)
proxy_http_module (shared)
proxy_hcheck_module (shared)
proxy_scgi_module (shared)
proxy_uwsgi_module (shared)
proxy_wstunnel_module (shared)
ssl_module (shared)
php7_module (shared)
$ httpd -l
Compiled in modules:
core.c
mod_so.c
http_core.c


httpd/http.d/http.conf



<IfModule mpm_prefork_module>
    StartServers           2
    MinSpareServers        2
    MaxSpareServers        4
    MaxRequestWorkers      10
    MaxConnectionsPerChild 1
</IfModule>
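Worth noting: with MaxConnectionsPerChild 1 a child exits after serving one connection, returning its memory to the OS, but it cannot be recycled in the middle of a connection, and each report run here is one very long request. A quick sketch to confirm whether children ever get recycled (the same PIDs persisting for hours means the directive never gets a chance to act):

# Print the httpd PIDs every 10 seconds; unchanging PIDs mean no recycling.
while true ; do
    date
    pgrep -x httpd
    sleep 10
done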









apache-httpd memory php amazon-ec2 out-of-memory






asked Dec 4 at 12:21 by Shard
  • The issue might be in your PHP code running through the worker. When you invoke PHP via an HTTP request, memory is freed automatically once execution completes; but a worker runs continuously, so variables you assign are not freed automatically. It is better to set values to null after use, and to unset arrays or their indexes accordingly.
    – satya prakash patel
    Dec 4 at 13:07












  • "After completion of execution": the PHP doesn't complete from start to finish for about 20 hours (less when it crashes). It loops through all of the reports in the queue until done, but it retains nothing from the previous iteration; I've checked memory in the PHP layer and it's fine. Apache, however, seems to hold onto that memory even after PHP releases it, so setting values to null wouldn't help. Garbage collection works fine in the PHP layer; it's Apache holding onto the memory that is the issue.
    – Shard
    Dec 4 at 14:13
1 Answer
... three cron jobs which execute every minute to check the process queue and if anything is found they'll keep running until the queue is empty




This is prone to memory leaks, so instead use a bash daemon to sequentially spawn a new PHP instance for every item in the queue:



while true ; do
    while read -r F ; do
        php /var/www/html/my.site.com/process1.php "$F"
    done < <(find /path/to/queue -type f)
    sleep 1
done


Calling

wget -q -O /dev/null http://localhost/core/processing

needlessly involves Apache and allows it to hold onto memory for longer than the duration of one item.
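If rewriting the loop in bash is not viable, a smaller change in the same spirit is to have cron invoke the PHP CLI directly, so each batch runs in a short-lived process whose memory is returned to the OS when it exits. A sketch, with a hypothetical path standing in for the real entry point:

# crontab: run the queue processor outside Apache every minute
* * * * * /usr/bin/php /var/www/html/core/processing.php >/dev/null 2>&1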






answered Dec 4 at 13:47, edited Dec 4 at 23:45 by user1133275
  • We definitely don't want to try to process every item in the queue at the same time. Also, the cron just calls PHP; we wouldn't give a bash script database access.
    – Shard
    Dec 4 at 14:02










  • Also, I'm not really sure how the cron jobs would be prone to memory leaks; they just use wget to call the script: wget -q -O /dev/null http://localhost/core/processing
    – Shard
    Dec 4 at 14:05










  • @Shard The example would process one item at a time, not "[all] at the same time". It's Apache/PHP that is prone to leaking, so one way to stop that is to take the loop out of Apache/PHP, so it can only hold onto memory for the duration of one item.
    – user1133275
    Dec 4 at 23:37










  • I see, thank you. I guess it's possible. We currently have a lot of infrastructure built with PHP, with various configs, batch sizes, worker counts, etc. I could disable the line that attempts another batch once the previous one finishes, and implement the loop in a different language as you've done here. But I know that won't fly well with anyone, and it also affects all the processing done for other servers, since they use the same code; what is processed, and how, is just a change of config files. So although your way would work, it's unfortunately not really viable for me.
    – Shard
    2 days ago












  • I think your point about moving away from wget could do it, though; I'll try calling it more directly.
    – Shard
    2 days ago










