nginx-connector and 502 responses #283

Closed
cello86 opened this issue Jun 29, 2022 · 5 comments

Comments

@cello86

cello86 commented Jun 29, 2022

Hi All,
we have installed nginx 1.21.x with the latest version of nginx-connector + mod_security and we noticed that in some cases some POST requests are checked by mod_security but the client receive 502 or 499 without issues or errors into the logs.

We tried to disable the mod_sec configuration and it works fine.

Could you help us to identify the root cause of this issue?

Thanks,
Marcello

@martinhsv
Contributor

martinhsv commented Jun 30, 2022

Without more information it's not possible to be certain what is happening, however ...

One thing that may be worth checking immediately is a PCRE conflict. Beginning with 1.21.5, nginx uses PCRE2 by default.

Either:
a) use --without-pcre2 when building nginx with the connector (see #261 (comment))
OR
b) build ModSecurity v3.0.7 with --with-pcre2 (see owasp-modsecurity/ModSecurity#2719)
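As a concrete illustration of option (a), a build sketch (directory names and the nginx version are assumptions; adjust to your layout):

```shell
# Hypothetical layout: ModSecurity-nginx checked out next to the nginx source tree.
cd nginx-1.21.6
./configure \
    --add-dynamic-module=../ModSecurity-nginx \
    --without-pcre2      # force nginx back to PCRE1 so it matches a PCRE1-built libmodsecurity
make modules
```

The point of --without-pcre2 is to keep nginx and libmodsecurity linked against the same regex library; mixing PCRE1 and PCRE2 between the two is the conflict described above.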

If nothing along those lines seems promising, please provide additional information. Have a look at the bug template, which mentions things like DebugLog output (at level 9).
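For reference, the debug-log directives mentioned in the bug template go in your modsecurity.conf (the log path here is just an example):

```nginx
# Enable verbose ModSecurity v3 debug output at the maximum level.
SecDebugLog /var/log/modsec_debug.log
SecDebugLogLevel 9
```

Level 9 is very chatty, so enable it only while reproducing the problem.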

@cello86
Author

cello86 commented Jun 30, 2022

Hi @martinhsv,
we're using these product versions:

  • NGINX_VERSION=1.21.6
  • MODSECURITY_VERSION=3.0.5
  • MODSECURITY_NGINX_VERSION=1.0.2
  • OWASP_CRS_VERSION=3.2.0

We compiled nginx and ModSecurity with PCRE (not PCRE2). The upload hung, and the entire nginx instance remained hung for one minute. strace reports these entries repeated:

write(8, "[1656597606] [/v1/intranet/publi"..., 127) = 127
fcntl(8, F_SETLKW, {l_type=F_UNLCK, l_whence=SEEK_SET, l_start=0, l_len=0}) = 0
fcntl(8, F_SETLKW, {l_type=F_WRLCK, l_whence=SEEK_SET, l_start=0, l_len=0}) = 0
write(8, "[1656597606] [/v1/intranet/publi"..., 127) = 127
fcntl(8, F_SETLKW, {l_type=F_UNLCK, l_whence=SEEK_SET, l_start=0, l_len=0}) = 0

The access log reported:

127.0.0.1 56933 [30/Jun/2022:16:04:36 +0200] "POST /v1/intranet/public/uploads/12155 HTTP/1.1" 502 165 79.028 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" "-" v3 "auth=- xid=-" TLSv1.3 TLS_AES_256_GCM_SHA384 "-"

@cello86
Author

cello86 commented Jun 30, 2022

modsec_debug.log.gz

@martinhsv
Contributor

Hello @cello86 ,

Apologies for the delay in getting back to this.

I did have a look at the debug log that you provided. There's a lot going on there. There are 26 HTTP transactions altogether: 13 OPTIONS, 5 HEAD, and 8 POST.

Are all of those HTTP requests initiated from a single client action? (The timing may be consistent with that since the first and last transactions are only 436 ms apart.)

The OPTIONS and HEAD transactions don't show any obvious anomalies.

The first two POST transactions show normal flow (including RESPONSE_HEADERS, RESPONSE_BODY, and LOGGING). One of those two looks like it wrote to the audit log -- was there output there? It would also be interesting to know whether the client received reasonable HTTP responses for both of those.

The remaining six POST transactions show no activity after the REQUEST_BODY phase -- which is at least a symptom of what you have reported. If six is the number of nginx worker processes you have, that would be consistent with your report that 'the entire nginx instance remained in hang for one minute'.

Did the uploaded file(s) get written to disk?

A few things you could consider experimenting with or looking into:

  • does the problem only manifest with very large files? Or even very small files? (Does it always occur for a particular upload location? Or intermittently?)
  • have you only seen the problem with that request Content-Type (application/offset+octet-stream), or does it occur with other content types as well?
  • have you tried looking at nginx's own error.log file with elevated logging levels (e.g. 'debug' rather than 'warn' or 'error')?
  • since your strace output might suggest a file-locking issue, you could check whether audit log writing is itself causing the problem (turn off audit logging, use Concurrent instead, or try using the HTTPS option)
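To illustrate the last two bullets, the relevant settings would look roughly like this (paths are examples; the directive values you can use depend on your ModSecurity version, so treat this as a sketch):

```nginx
# In nginx.conf: raise error-log verbosity from 'warn'/'error' to 'debug'.
error_log /var/log/nginx/error.log debug;

# In modsecurity.conf: rule out the serially locked audit log.
# SecAuditEngine Off           # first test: disable audit logging entirely, or
# SecAuditLogType Parallel     # per-transaction files instead of one locked file
#                              # (the v2-era name for this mode is "Concurrent")
```

If the hang disappears with audit logging off, that points squarely at the file-locking pattern seen in the strace output.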

@martinhsv
Contributor

Was there anything further on this?
