Memory issue with code base of Feb24 #36

Closed
mimugmail opened this issue Mar 2, 2017 · 20 comments

@mimugmail

Hi,

I just ran Nginx+ with the code base of Feb 24 on my backup machine. It just listens on port 80 for one IP and is not in production. Now my monitoring system alerted me about full memory, and indeed, all of my 64GB of RAM were gone and nginx restarted itself:

2017/03/02 14:29:49 [notice] 738#738: signal 17 (SIGCHLD) received
2017/03/02 14:29:49 [alert] 738#738: worker process 22623 exited on signal 9
2017/03/02 14:29:49 [notice] 738#738: start worker process 26684
2017/03/02 14:29:49 [notice] 738#738: signal 29 (SIGIO) received
2017/03/02 14:29:49 [notice] 738#738: signal 17 (SIGCHLD) received
2017/03/02 14:29:49 [alert] 738#738: worker process 22624 exited on signal 9
2017/03/02 14:29:49 [alert] 738#738: worker process 22625 exited on signal 9
2017/03/02 14:29:49 [alert] 738#738: worker process 22627 exited on signal 9
2017/03/02 14:29:49 [notice] 738#738: start worker process 26685
2017/03/02 14:29:49 [notice] 738#738: start worker process 26686
2017/03/02 14:29:49 [notice] 738#738: start worker process 26687
2017/03/02 14:29:49 [notice] 738#738: signal 29 (SIGIO) received
2017/03/02 14:29:51 [notice] 738#738: signal 17 (SIGCHLD) received
2017/03/02 14:29:51 [alert] 738#738: worker process 22643 exited on signal 9
2017/03/02 14:29:51 [notice] 738#738: start worker process 26688
2017/03/02 14:29:51 [notice] 738#738: signal 29 (SIGIO) received

I grabbed the nginx logs, but the last connection was at 11:39:02.

I had the same issue two days ago after benchmarking with "ab". I thought that was the cause, so I restarted nginx, ran the benchmark again, and waited. Now, after two days, I have the same memory error again.

Sadly I cannot reproduce it.

Is anyone else running the same code base and seeing this issue?
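One generic way to catch a leak like this earlier (not from the thread; just a sketch, assuming a Linux box with a procps-style `ps` and `pgrep`) is to log per-process RSS periodically, so a leaking worker shows up as a steadily growing number long before the OOM killer fires:

```shell
#!/bin/sh
# Minimal sketch: log RSS (in kB) for every nginx process once per
# interval. A leak appears as one PID whose rss_kb keeps climbing.

rss_of() {
    # Print the resident set size in kB for a single PID
    # (empty output if the PID no longer exists).
    ps -o rss= -p "$1" 2>/dev/null | tr -d ' '
}

log_nginx_rss() {
    for pid in $(pgrep nginx); do
        printf '%s pid=%s rss_kb=%s\n' "$(date '+%F %T')" "$pid" "$(rss_of "$pid")"
    done
}

# Uncomment to run continuously:
# while :; do log_nginx_rss; sleep 60; done >> /var/log/nginx-rss.log
```

The log file name and interval are arbitrary choices here; a monitoring agent (the thread later shows zabbix_agentd running) could collect the same numbers instead.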

@mimugmail
Author

@defanator do you have a test machine where you can run nginx for a few days and monitor the memory? Recently the memory of my standby machine filled up again and nginx restarted itself.

(attachment: waf-mem memory usage graph)

@defanator
Collaborator

defanator commented Mar 5, 2017

@mimugmail, first, nginx never "restarts itself" (perhaps it was the OOM killer?).

Second - sure, I can set up an instance and leave it running for a few days, but it would be good to have more precise instructions on how to reproduce the scenario (system configuration, nginx configuration, any load patterns).

Finally, what makes you think the cause is in nginx?

@mimugmail
Author

Hm, you're right. I started a second machine with a mostly empty config and just the CRS3 rules.
If it occurs there too, I'll disable MS3 and wait again.

I'll keep you updated.

@mimugmail
Author

Ok, first results:

With the code base of the 24th, nginx 1.11.9, and a really basic nginx configuration, there's no issue.

The other machine is the same system (Debian 8) but with N+, so the module is compiled against 1.11.5 and the configuration is quite large (multiple balancers with plenty of stages and around 30 virtual hosts). Yesterday I just killed the nginx process and didn't start it again. Now the memory blowouts are gone. The next step is to disable modsecurity on just the virtual hosts, but since the config is synced regularly from the master, I have to check with my colleagues.

@zimmerle
Contributor

zimmerle commented Mar 9, 2017

Hi @mimugmail,

Any news yet?

@mimugmail
Author

Yesterday at 14:00 I started N+ with 33 virtual hosts and MS3 disabled, but with the module loaded via the global config. No memory issue yet.

The next test is to enable MS3 again and drop external connections, to be sure that no external process is causing the issue.
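The split described here - module loaded globally but the engine disabled per virtual host - would look roughly like this in ModSecurity-nginx terms (a sketch only; paths and server names are illustrative, not taken from the thread):

```nginx
# nginx.conf: the module is loaded for the whole instance ...
load_module modules/ngx_http_modsecurity_module.so;

http {
    server {
        listen 80;
        server_name example.test;

        # ... but rule processing is switched off for this vhost
        # while isolating the leak:
        modsecurity off;

        # To re-enable for the next test round:
        # modsecurity on;
        # modsecurity_rules_file /etc/nginx/modsec/main.conf;
    }
}
```

This makes it possible to tell apart "the loaded module leaks by itself" from "processing requests through the rules leaks", which is exactly the distinction being tested in these comments.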

@mimugmail
Author

@zimmerle Sorry for the delay. I have now started nginx with the live config, but it only listens on IP addresses which are bound to the current master, so it's just running and no connecting client can be responsible for any memory issues.

@mimugmail
Author

So far there has been no memory issue running N+ with MS3 but not listening on any active IP.
I edited a virtual host with the current IP address of the backup system and did a reopen.

The issue should be back within the next few hours ...

@mimugmail
Author

@zimmerle
It took two hours:
2017/03/18 09:48:47 [notice] 15175#15175: reopening logs
2017/03/18 09:48:47 [notice] 15176#15176: reopening logs
2017/03/18 09:48:47 [notice] 15188#15188: reopening logs
2017/03/18 09:48:47 [notice] 15184#15184: reopening logs
2017/03/18 09:48:47 [notice] 15186#15186: reopening logs
2017/03/18 09:48:47 [notice] 15187#15187: reopening logs
2017/03/18 11:32:46 [notice] 29425#29425: signal 17 (SIGCHLD) received
2017/03/18 11:32:46 [alert] 29425#29425: worker process 15186 exited on signal 9
2017/03/18 11:32:46 [notice] 29425#29425: start worker process 16863
2017/03/18 11:32:46 [notice] 29425#29425: signal 29 (SIGIO) received
2017/03/18 12:49:43 [notice] 29425#29425: signal 17 (SIGCHLD) received
2017/03/18 12:49:43 [alert] 29425#29425: worker process 16863 exited on signal 9
2017/03/18 12:49:43 [notice] 29425#29425: start worker process 18001
2017/03/18 12:49:44 [notice] 29425#29425: signal 29 (SIGIO) received
2017/03/18 13:05:05 [notice] 29425#29425: signal 17 (SIGCHLD) received
2017/03/18 13:05:05 [alert] 29425#29425: worker process 15188 exited on signal 9
2017/03/18 13:05:05 [notice] 29425#29425: start worker process 18233
2017/03/18 13:05:06 [notice] 29425#29425: signal 29 (SIGIO) received
2017/03/18 15:40:15 [notice] 29425#29425: signal 17 (SIGCHLD) received
2017/03/18 15:40:15 [alert] 29425#29425: worker process 15184 exited on signal 9
2017/03/18 15:40:15 [notice] 29425#29425: start worker process 20583

In this time there were only 4 accesses (the first three were my tests, plus one "client"/bot/whatever):
81.24.x - - [18/Mar/2017:09:48:49 +0100] "GET / HTTP/1.1" 403 564 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
81.24.x - - [18/Mar/2017:09:49:13 +0100] "GET / HTTP/1.1" 403 564 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
81.24.x - - [18/Mar/2017:09:49:20 +0100] "GET / HTTP/1.1" 403 564 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
157.52.x - - [18/Mar/2017:10:55:40 +0100] "GET / HTTP/1.1" 403 564 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"

@defanator
Collaborator

@mimugmail, could you please share system logs? Were there any messages from OOM killer?

Do you have any ps or top output showing how much memory nginx workers were consuming?

Could you please also share the entire nginx (nginx -T) and modsecurity configuration?
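The diagnostics requested here can be gathered with a few standard commands; the sketch below is generic (log locations vary by distribution - `dmesg` and `/var/log/kern.log` are typical on Debian 8), and the small helper for pulling OOM-killer verdicts out of kernel log text is an illustration, not an official tool:

```shell
#!/bin/sh
# Sketch of commands matching the requested diagnostics:
#
#   nginx -T > nginx-full-config.txt     # full effective configuration
#   ps auxf | grep '[n]ginx'             # current per-process memory use
#   dmesg | grep -i 'killed process'     # OOM killer verdicts, if any

# Helper: print "PID name" for each OOM-killer "Killed process" line.
oom_victims() {
    sed -n 's/.*Killed process \([0-9][0-9]*\) (\([^)]*\)).*/\1 \2/p'
}

# Example usage: dmesg | oom_victims
```

Run against the kernel log pasted in the next comment, this would report the killed nginx worker and its PID.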

@mimugmail
Author

@defanator Yep, here's the output:

[3619581.644080] nginx invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=0-1, order=0, oom_score_adj=0
[3619581.644082] nginx cpuset=/ mems_allowed=0-1
[3619581.644087] CPU: 2 PID: 13918 Comm: nginx Not tainted 4.9.0-0.bpo.1-amd64 #1 Debian 4.9.2-2~bpo8+1
[3619581.644088] Hardware name: FUJITSU PRIMERGY RX2530 M2/D3279-B1, BIOS V5.0.0.11 R1.7.0 for D3279-B1x 04/21/2016
[3619581.644089] 0000000000000000 ffffffff8c72a1f5 ffffb820aba67ce0 ffff8bdd18557000
[3619581.644092] ffffffff8c5fef45 0000000000000000 0000000000000000 ffff8bd51fc987c0
[3619581.644094] ffff8bdd18557080 ffffffff8c42476b ffff8bd53fffccf0 ffffffff8c586a23
[3619581.644097] Call Trace:
[3619581.644105] [] ? dump_stack+0x5c/0x77
[3619581.644109] [] ? dump_header+0x85/0x212
[3619581.644113] [] ? __switch_to+0x2bb/0x700
[3619581.644117] [] ? get_page_from_freelist+0x113/0xad0
[3619581.644118] [] ? oom_kill_process+0x228/0x3e0
[3619581.644123] [] ? has_capability_noaudit+0x1a/0x20
[3619581.644125] [] ? oom_badness+0xee/0x160
[3619581.644126] [] ? out_of_memory+0x10c/0x4b0
[3619581.644128] [] ? __alloc_pages_slowpath+0xa7e/0xaa0
[3619581.644130] [] ? __alloc_pages_nodemask+0x29b/0x2e0
[3619581.644133] [] ? alloc_pages_vma+0xc1/0x240
[3619581.644135] [] ? handle_mm_fault+0x1417/0x1650
[3619581.644138] [] ? mprotect_fixup+0x142/0x270
[3619581.644141] [] ? __do_page_fault+0x253/0x510
[3619581.644145] [] ? page_fault+0x28/0x30
[3619581.644146] Mem-Info:
[3619581.644154] active_anon:15275784 inactive_anon:898573 isolated_anon:0
active_file:247 inactive_file:840 isolated_file:0
unevictable:0 dirty:0 writeback:248 unstable:0
slab_reclaimable:4687 slab_unreclaimable:16862
mapped:2144 shmem:1621 pagetables:60342 bounce:0
free:56615 free_pcp:6120 free_cma:0
[3619581.644159] Node 0 active_anon:30138772kB inactive_anon:1772964kB active_file:580kB inactive_file:656kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:4676kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 4672kB writeback_tmp:0kB unstable:0kB pages_scanned:182750 all_unreclaimable? yes
[3619581.644163] Node 1 active_anon:30964364kB inactive_anon:1821328kB active_file:408kB inactive_file:2704kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:3900kB dirty:0kB writeback:992kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 1812kB writeback_tmp:0kB unstable:0kB pages_scanned:54592 all_unreclaimable? yes
[3619581.644165] Node 0 DMA free:15888kB min:20kB low:32kB high:44kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15980kB managed:15896kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:8kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[3619581.644169] lowmem_reserve[]: 0 1632 31826 31826 31826
[3619581.644172] Node 0 DMA32 free:123020kB min:2292kB low:3960kB high:5628kB active_anon:1558940kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1753836kB managed:1688268kB mlocked:0kB slab_reclaimable:56kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:3052kB bounce:0kB free_pcp:164kB local_pcp:44kB free_cma:0kB
[3619581.644176] lowmem_reserve[]: 0 0 30193 30193 30193
[3619581.644179] Node 0 Normal free:42292kB min:42448kB low:73364kB high:104280kB active_anon:28579832kB inactive_anon:1772964kB active_file:580kB inactive_file:656kB unevictable:0kB writepending:0kB present:31457280kB managed:30923148kB mlocked:0kB slab_reclaimable:12712kB slab_unreclaimable:43556kB kernel_stack:5096kB pagetables:159364kB bounce:0kB free_pcp:10152kB local_pcp:708kB free_cma:0kB
[3619581.644182] lowmem_reserve[]: 0 0 0 0 0
[3619581.644185] Node 1 Normal free:45260kB min:45344kB low:78368kB high:111392kB active_anon:30964364kB inactive_anon:1821328kB active_file:408kB inactive_file:2704kB unevictable:0kB writepending:992kB present:33554432kB managed:33026364kB mlocked:0kB slab_reclaimable:5980kB slab_unreclaimable:23880kB kernel_stack:3624kB pagetables:78952kB bounce:0kB free_pcp:14164kB local_pcp:204kB free_cma:0kB
[3619581.644189] lowmem_reserve[]: 0 0 0 0 0
[3619581.644191] Node 0 DMA: 2*4kB (U) 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15888kB
[3619581.644200] Node 0 DMA32: 9*4kB (UE) 11*8kB (UME) 7*16kB (UME) 11*32kB (UM) 7*64kB (U) 5*128kB (UME) 2*256kB (UE) 2*512kB (UE) 1*1024kB (E) 2*2048kB (ME) 28*4096kB (UM) = 123020kB
[3619581.644211] Node 0 Normal: 1773*4kB (UME) 375*8kB (UME) 236*16kB (UME) 144*32kB (UME) 79*64kB (ME) 45*128kB (E) 22*256kB (ME) 11*512kB (UME) 2*1024kB (UM) 0*2048kB 0*4096kB = 42604kB
[3619581.644221] Node 1 Normal: 280*4kB (ME) 183*8kB (E) 177*16kB (ME) 117*32kB (ME) 79*64kB (ME) 36*128kB (E) 19*256kB (UME) 8*512kB (UME) 3*1024kB (E) 1*2048kB (M) 3*4096kB (UM) = 45192kB
[3619581.644232] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[3619581.644233] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[3619581.644234] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[3619581.644235] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[3619581.644235] 4438 total pagecache pages
[3619581.644236] 1264 pages in swap cache
[3619581.644237] Swap cache stats: add 85459517, delete 85458253, find 21319700/24401710
[3619581.644238] Free swap = 0kB
[3619581.644239] Total swap = 7798780kB
[3619581.644239] 16695382 pages RAM
[3619581.644240] 0 pages HighMem/MovableOnly
[3619581.644240] 281963 pages reserved
[3619581.644241] 0 pages hwpoisoned
[3619581.644241] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[3619581.644251] [ 1113] 0 1113 10368 1 23 3 258 -1000 systemd-udevd
[3619581.644253] [ 1125] 0 1125 20526 1151 46 3 51 0 systemd-journal
[3619581.644255] [ 1841] 0 1841 6869 10 19 3 51 0 cron
[3619581.644256] [ 1842] 0 1842 4756 0 15 3 52 0 atd
[3619581.644257] [ 1845] 0 1845 4964 0 15 3 79 0 systemd-logind
[3619581.644259] [ 1860] 105 1860 10531 1 26 3 101 -900 dbus-daemon
[3619581.644260] [ 1871] 113 1871 21918 1 43 3 249 0 zabbix_agentd
[3619581.644261] [ 1880] 113 1880 21918 572 44 3 232 0 zabbix_agentd
[3619581.644263] [ 1881] 113 1881 21918 34 43 3 244 0 zabbix_agentd
[3619581.644264] [ 1882] 113 1882 21918 34 43 3 244 0 zabbix_agentd
[3619581.644265] [ 1883] 113 1883 21918 34 43 3 244 0 zabbix_agentd
[3619581.644267] [ 1884] 113 1884 21918 55 44 3 249 0 zabbix_agentd
[3619581.644268] [ 1946] 110 1946 8346 16 22 3 144 0 ntpd
[3619581.644269] [ 1949] 0 1949 64668 47 28 3 534 0 rsyslogd
[3619581.644271] [ 1950] 0 1950 12893 13 25 3 135 0 keepalived
[3619581.644272] [ 1952] 0 1952 14491 13 30 3 175 0 keepalived
[3619581.644273] [ 1953] 0 1953 14460 22 30 3 155 0 keepalived
[3619581.644275] [ 1955] 0 1955 1064 1 8 3 36 0 acpid
[3619581.644276] [ 1962] 0 1962 4926 91 15 3 65 0 irqbalance
[3619581.644277] [ 2126] 0 2126 9042 0 23 3 144 0 master
[3619581.644279] [ 2131] 108 2131 9600 1 25 3 157 0 qmgr
[3619581.644280] [ 2232] 0 2232 3604 1 12 3 36 0 agetty
[3619581.644282] [22529] 0 22529 6481 4 18 3 156 0 smartd
[3619581.644285] [22663] 0 22663 20682 57 44 3 223 0 sshd
[3619581.644286] [22665] 0 22665 5889 136 16 3 412 0 bash
[3619581.644288] [17566] 0 17566 20682 18 44 3 208 0 sshd
[3619581.644289] [17568] 0 17568 5819 1 15 3 479 0 bash
[3619581.644291] [29425] 0 29425 356683 10 663 5 293804 0 nginx
[3619581.644292] [15150] 109 15150 356686 0 657 5 293823 0 nginx
[3619581.644294] [15151] 109 15151 356686 0 657 5 293829 0 nginx
[3619581.644295] [15152] 109 15152 356686 0 657 5 293815 0 nginx
[3619581.644296] [15153] 109 15153 356686 0 657 5 293818 0 nginx
[3619581.644298] [15154] 109 15154 356686 0 657 5 293814 0 nginx
[3619581.644299] [15155] 109 15155 356686 0 657 5 293834 0 nginx
[3619581.644300] [15156] 109 15156 356686 0 657 5 293808 0 nginx
[3619581.644302] [15157] 109 15157 356686 0 657 5 293814 0 nginx
[3619581.644303] [15158] 109 15158 356686 0 657 5 293815 0 nginx
[3619581.644304] [15159] 109 15159 356686 0 657 5 293816 0 nginx
[3619581.644305] [15160] 109 15160 356686 0 657 5 293835 0 nginx
[3619581.644307] [15161] 109 15161 356686 0 657 5 293829 0 nginx
[3619581.644308] [15162] 109 15162 356686 0 657 5 293837 0 nginx
[3619581.644309] [15163] 109 15163 356686 0 657 5 293832 0 nginx
[3619581.644310] [15164] 109 15164 356686 10 657 5 293796 0 nginx
[3619581.644311] [15165] 109 15165 356686 6 657 5 293800 0 nginx
[3619581.644313] [15166] 109 15166 356686 0 657 5 293824 0 nginx
[3619581.644314] [15167] 109 15167 356686 0 657 5 293839 0 nginx
[3619581.644315] [15168] 109 15168 356686 1 659 5 293828 0 nginx
[3619581.644316] [15169] 109 15169 356686 0 657 5 293832 0 nginx
[3619581.644318] [15170] 109 15170 356686 0 657 5 293833 0 nginx
[3619581.644319] [15173] 109 15173 356686 0 657 5 293818 0 nginx
[3619581.644320] [15174] 109 15174 356686 1 659 5 293812 0 nginx
[3619581.644322] [15175] 109 15175 356686 0 657 5 293835 0 nginx
[3619581.644323] [15176] 109 15176 356686 0 657 5 293814 0 nginx
[3619581.644324] [15177] 109 15177 356686 0 657 5 293828 0 nginx
[3619581.644326] [15180] 109 15180 356686 0 657 5 293832 0 nginx
[3619581.644327] [15182] 109 15182 356686 0 657 5 293826 0 nginx
[3619581.644329] [15419] 0 15419 13796 0 32 3 173 -1000 sshd
[3619581.644330] [34128] 0 34128 4884 134 15 3 26 0 conntrackd
[3619581.644331] [34174] 0 34174 30353 12 58 3 450 0 proftpd
[3619581.644333] [38217] 109 38217 356686 0 657 5 293822 0 nginx
[3619581.644334] [38220] 109 38220 356686 0 657 5 293806 0 nginx
[3619581.644335] [40328] 109 40328 356686 1 659 5 293815 0 nginx
[3619581.644336] [ 810] 109 810 356686 0 657 5 293826 0 nginx
[3619581.644338] [ 4385] 109 4385 356686 0 657 5 293812 0 nginx
[3619581.644339] [10441] 109 10441 356686 26 657 5 293782 0 nginx
[3619581.644340] [10442] 109 10442 356686 14 657 5 293794 0 nginx
[3619581.644342] [13917] 109 13917 356686 278 659 5 293651 0 nginx
[3619581.644343] [13918] 109 13918 17990351 16170643 35121 72 1768405 0 nginx
[3619581.644344] [19063] 108 19063 9558 0 24 3 140 0 pickup
[3619581.644346] Out of memory: Kill process 13918 (nginx) score 978 or sacrifice child
[3619581.645898] Killed process 13918 (nginx) total-vm:71961404kB, anon-rss:64681904kB, file-rss:640kB, shmem-rss:28kB
[3619583.731282] oom_reaper: reaped process 13918 (nginx), now anon-rss:0kB, file-rss:0kB, shmem-rss:28kB

ps aufx:
root 29425 0.0 0.0 1426732 960 ? Ss Mar13 0:11 nginx: master process nginx
nginx 15150 0.0 0.0 1426744 588 ? S Mar18 0:00 _ nginx: worker process
nginx 15151 0.0 0.0 1426744 572 ? S Mar18 0:00 _ nginx: worker process
nginx 15152 0.0 0.0 1426744 716 ? S Mar18 0:00 _ nginx: worker process
nginx 15153 0.0 0.0 1426744 588 ? S Mar18 0:00 _ nginx: worker process
nginx 15154 0.0 0.0 1426744 816 ? S Mar18 0:00 _ nginx: worker process
nginx 15155 0.0 0.0 1426744 196 ? S Mar18 0:00 _ nginx: worker process
nginx 15156 0.0 0.0 1426744 824 ? S Mar18 0:00 _ nginx: worker process
nginx 15157 0.0 0.0 1426744 720 ? S Mar18 0:00 _ nginx: worker process
nginx 15158 0.0 0.0 1426744 816 ? S Mar18 0:00 _ nginx: worker process
nginx 15159 0.0 0.0 1426744 732 ? S Mar18 0:00 _ nginx: worker process
nginx 15160 0.0 0.0 1426744 216 ? S Mar18 0:00 _ nginx: worker process
nginx 15161 0.0 0.0 1426744 568 ? S Mar18 0:00 _ nginx: worker process
nginx 15162 0.0 0.0 1426744 140 ? S Mar18 0:00 _ nginx: worker process
nginx 15163 0.0 0.0 1426744 464 ? S Mar18 0:00 _ nginx: worker process
nginx 15164 0.0 0.0 1426744 44 ? S Mar18 0:00 _ nginx: worker process
nginx 15165 0.0 0.0 1426744 28 ? S Mar18 0:00 _ nginx: worker process
nginx 15166 0.0 0.0 1426744 600 ? S Mar18 0:00 _ nginx: worker process
nginx 15167 0.0 0.0 1426744 136 ? S Mar18 0:00 _ nginx: worker process
nginx 15168 0.0 0.0 1426744 128 ? S Mar18 0:00 _ nginx: worker process
nginx 15169 0.0 0.0 1426744 524 ? S Mar18 0:00 _ nginx: worker process
nginx 15170 0.0 0.0 1426744 396 ? S Mar18 0:00 _ nginx: worker process
nginx 15173 0.0 0.0 1426744 704 ? S Mar18 0:00 _ nginx: worker process
nginx 15174 0.0 0.0 1426744 640 ? S Mar18 0:00 _ nginx: worker process
nginx 15175 0.0 0.0 1426744 368 ? S Mar18 0:00 _ nginx: worker process
nginx 15176 0.0 0.0 1426744 780 ? S Mar18 0:00 _ nginx: worker process
nginx 15177 0.0 0.0 1426744 560 ? S Mar18 0:00 _ nginx: worker process
nginx 15180 0.0 0.0 1426744 588 ? S Mar18 0:00 _ nginx: worker process
nginx 15182 0.0 0.0 1426744 584 ? S Mar18 0:00 _ nginx: worker process
nginx 38217 0.0 0.0 1426744 0 ? S Mar19 0:00 _ nginx: worker process
nginx 38220 0.0 0.0 1426744 4 ? S Mar19 0:00 _ nginx: worker process
nginx 40328 0.0 0.0 1426744 800 ? S Mar19 0:00 _ nginx: worker process
nginx 810 0.0 0.0 1426744 576 ? S Mar19 0:00 _ nginx: worker process
nginx 4385 0.0 0.0 1426744 772 ? S Mar19 0:00 _ nginx: worker process
nginx 10441 0.0 0.0 1426744 108 ? S 00:44 0:00 _ nginx: worker process
nginx 10442 0.0 0.0 1426744 60 ? S 00:44 0:00 _ nginx: worker process
nginx 13917 0.3 0.0 1426744 1116 ? S 04:34 1:18 _ nginx: worker process
nginx 20277 0.0 0.0 1426744 4756 ? S 11:18 0:00 _ nginx: worker process

Can I send you the nginx -T output via N+ commercial support, and then you can close the ticket there again? I don't want to post very sensitive data here, but I also know MS3 in a scenario like this is not officially supported via N+ support.

@defanator
Collaborator

@mimugmail, is it happening with the "code base of Feb 24"? May I ask you to try the latest nginx-plus-module-modsecurity package then, or to update ModSecurity and ModSecurity-nginx to the v3/master and master branches, respectively?

If the issue persists, we will proceed with further investigation.

@mimugmail
Author

@defanator Yes, it's still the 24th, so I'll update it now to the latest v3/master.
Before I update the system from R11 to R12: was there really a change in the code base of nginx-plus-module-modsecurity, or was it only recompiled for compatibility with R12?

@defanator
Collaborator

@mimugmail, it was recompiled from sources as of Mar 10 (after v3/dev/parser was merged into v3/master).

@mimugmail
Author

@defanator Ooooh 👍 I wasn't informed by the N+ team - great news!
I'll switch back from a source build to the package on my backup machine, really great news! :)

@defanator
Collaborator

@mimugmail, sure - please share your results here.

@mimugmail
Author

@defanator With the package it runs very smoothly 👍
I'll try to put this machine into production for a couple of hours once my team is fully available.

@mimugmail
Author

mimugmail commented Mar 21, 2017

@defanator Do you know if there's a new version of the N+ documentation (WAF guide)? I switched and had massive blocking from rules 912100, 912110, and 910130. My thought was that with the current code base I wouldn't have to disable certain rule types.

@defanator
Collaborator

@mimugmail, we're working on the documentation. You're correct: the current code base works with OWASP CRS v3.0.0 as is (no manual removal of any pre-configured rules is required).

Please feel free to use the N+ support channel for such questions, keeping this thread focused on the memory issue - thank you!
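For context, "works with OWASP CRS v3.0.0 as is" means the rules file referenced by modsecurity_rules_file can simply include the stock CRS files without pruning individual rules first. The paths below are purely illustrative (they depend on where CRS was installed), not taken from the thread:

```nginx
# Illustrative /etc/nginx/modsec/main.conf, referenced from nginx via:
#   modsecurity_rules_file /etc/nginx/modsec/main.conf;
#
# Base ModSecurity engine settings:
Include /etc/nginx/modsec/modsecurity.conf
# Stock CRS v3.0.0, included unmodified:
Include /usr/local/owasp-crs/crs-setup.conf
Include /usr/local/owasp-crs/rules/*.conf
```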

@mimugmail
Author

@defanator Yep, so this issue seems to be fixed now; you can close it. Thanks for your efforts 👍
