Nginx is one of the highest-performance web servers in use today. Few servers match the way Nginx handles requests and client connections when serving a web application.
Out of the box, Nginx can handle hundreds of thousands of requests per second, but actual throughput depends on how the application architecture behind it responds.
Tune the system as described below if you need better performance from Nginx.
1. Set the Worker count
Setting the worker count is the first step in tuning Nginx performance, because the worker processes do all of the work for Nginx: they handle network connections, read and write content to disk, and communicate with upstream servers. The worker count should equal the number of CPUs or cores.
E.g. if your system has a dual-core CPU, set the worker count to 2; set it to 4 if you have a quad-core CPU.
The best way to set the workers is 'auto', which matches the number of available cores:
worker_processes auto;
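To confirm how many workers 'auto' will start, you can check the core count yourself (a quick read-only sanity check, not part of the Nginx configuration):

```shell
# Number of processing units available; with 'worker_processes auto;'
# Nginx starts one worker per core.
nproc

# The same information from the kernel's CPU table:
grep -c ^processor /proc/cpuinfo
```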
2. File Descriptors
File descriptors are operating-system resources used to represent connections and open files. On a system serving a large number of connections, the following setting may need to be raised depending on the load on the server.
Edit the /etc/sysctl.conf file as root:
$ sudo su
# vi /etc/sysctl.conf
Find and edit the line below; if it is not present, append it:
fs.file-max = 70000
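The new value does not take effect until the kernel reloads the file. A quick way to apply and verify it (the reload itself requires root, so it is shown as a comment here):

```shell
# After editing /etc/sysctl.conf, reload the settings as root:
#   sudo sysctl -p

# Verify the running system-wide file-descriptor limit:
cat /proc/sys/fs/file-max
```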
3. Set ulimit
Nginx throughput depends on the worker connections. By default, the worker connection limit is 512, but many systems can handle more, depending on the CPU and core count.
How to check the ulimit:
$ sudo su
# su - nginx
$ ulimit -Hn
$ ulimit -Sn
Set the limit in the /etc/security/limits.conf file to raise these values (as root).
# vi /etc/security/limits.conf
nginx soft nofile 10000
nginx hard nofile 30000
Or
* soft nofile 10000
* hard nofile 30000
Save and exit the file.
The worker_rlimit_nofile directive allows Nginx to raise this limit for its worker processes.
Edit /etc/nginx/conf.d/nginx.conf and set 'worker_rlimit_nofile'. This should be equal to the hard nofile limit.
$ sudo vi /etc/nginx/conf.d/nginx.conf
# set open fd limit to 30000
worker_rlimit_nofile 30000;
For the change to take effect, restart the nginx service.
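Putting the worker-related directives together, the top of nginx.conf might look like this sketch (the numbers are illustrative; worker_connections should stay below worker_rlimit_nofile, since each connection consumes at least one file descriptor):

```nginx
worker_processes auto;

# Must not exceed the hard nofile limit set in /etc/security/limits.conf
worker_rlimit_nofile 30000;

events {
    # Maximum simultaneous connections per worker (default is 512)
    worker_connections 10000;
}
```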
4. Backlog Queue
If the server appears slow under a high rate of incoming connections, or clients see uneven levels of performance, tune the backlog queue settings so that a larger number of incoming connections can be queued for acceptance.
net.core.somaxconn – The maximum number of connections that can be queued for acceptance by NGINX. The default is often very low. If you use a value greater than 512, also add the backlog parameter to the NGINX listen directive and set it to the same value.
Edit /etc/sysctl.conf
$ sudo vi /etc/sysctl.conf
Add the line below.
net.core.somaxconn = 65536
Edit the /etc/nginx/conf.d/nginx.conf
$ sudo vi /etc/nginx/conf.d/nginx.conf
Set the backlog parameter on the listen directive; it limits the maximum length of the queue of pending connections and should match the net.core.somaxconn value.
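For example, a server block with the backlog parameter might look like this (the port and server name are illustrative):

```nginx
server {
    # backlog should match the net.core.somaxconn value set above
    listen 80 backlog=65536;
    server_name example.com;
}
```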
5. Ephemeral Ports
When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port. You might want to change this setting:
net.ipv4.ip_local_port_range – The start and end of the range of port values. If you see that you are running out of ports, increase the range. A common setting is ports 1024 to 65000.
Edit the file /etc/sysctl.conf
$ sudo vi /etc/sysctl.conf
Add the lines below (net.ipv4.ip_local_port_range widens the ephemeral port range described above; net.ipv4.tcp_max_tw_buckets caps the number of sockets held in the TIME_WAIT state):
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_tw_buckets = 140000
Save and exit, then apply the new kernel settings with 'sudo sysctl -p'.
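You can inspect the active port range without editing anything (read-only, safe to run on any Linux host):

```shell
# Current ephemeral port range: the first and last port the kernel
# will hand out for outgoing (e.g. proxy-to-upstream) connections.
cat /proc/sys/net/ipv4/ip_local_port_range
```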
6. Keepalive settings
Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed to open and close connections.
keepalive_requests – The number of requests a client can make over a single keepalive connection. The default is 100, but a much higher value can be especially useful for testing with a load-generation tool, which generally sends a large number of requests from a single client.
keepalive_timeout – How long an idle keepalive connection remains open.
keepalive – The number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value.
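A sketch of how these three directives fit together in nginx.conf (the upstream name, addresses, and numbers are placeholders, not recommendations):

```nginx
http {
    upstream backend {
        server 10.0.0.1:8080;   # placeholder upstream servers
        server 10.0.0.2:8080;
        # Idle keepalive connections kept open to the upstream, per worker
        keepalive 32;
    }

    server {
        listen 80;

        # Requests allowed over one client keepalive connection
        keepalive_requests 1000;
        # How long an idle keepalive connection stays open
        keepalive_timeout 75s;

        location / {
            proxy_pass http://backend;
            # HTTP/1.1 and a cleared Connection header are required
            # for upstream keepalive to take effect
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
```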
7. Sendfile
The operating system’s sendfile() system call copies data from one file descriptor to another, often achieving zero-copy, which can speed up TCP data transfers. To enable NGINX to use it, include the sendfile directive in the http context or a server or location context.
Edit the file /etc/nginx/conf.d/nginx.conf
$ sudo vi /etc/nginx/conf.d/nginx.conf
Add the line below:
sendfile on;
Save and exit, then restart the nginx service.
8. Caching
By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve the response time to clients while at the same time dramatically reducing the load on the backend servers.
This needs a separate article to cover Nginx caching properly. I will make a new post soon.
9. Compression
Compressing responses sent to clients can greatly reduce their size, so they use less network bandwidth. Because compressing data consumes CPU resources, however, it is most useful when it’s really worthwhile to reduce bandwidth usage.
This too needs a separate article to cover Nginx compression.
Choose the settings that best fit your hardware configuration and your understanding of the web architecture.
If you want to install WordPress, read the WordPress with LEMP post.