Nginx Load Balancing — Advanced Configuration
Published: 2019-05-24



https://futurestud.io/tutorials/nginx-load-balancing-advanced-configuration

In the previous post, we showed you the required nginx configuration to pass traffic to a group of available servers. This week, we dive into advanced nginx configuration: load balancing methods, server weights, and health checks.



Load Balancing Mechanisms

nginx supports three load balancing methods out of the box. We’ll explain each of them in more detail in the sections below. For now, here are the three supported mechanisms:

  1. Round Robin
  2. IP Hash
  3. Least Connected

By default, nginx uses round robin to pass requests to application servers. You don’t need to state any precise configuration and can use a basic setup to make things work. A very stripped down nginx load balancing configuration can look like this:

upstream node_cluster {
    server 127.0.0.1:3000;  # Node.js instance 1
    server 127.0.0.1:3001;  # Node.js instance 2
    server 127.0.0.1:3002;  # Node.js instance 3
}

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://node_cluster/;
    }
}

The upstream module defines the cluster of application servers that handle the incoming requests, and within the server block of nginx, we simply proxy incoming connections to the defined cluster.

Let’s look at the concrete load balancing methods in more detail, starting with round robin.

Round Robin

This is the default configuration of nginx load balancing. You don’t need to explicitly configure this balancing type and it works seamlessly without hassle.

nginx passes incoming requests to the application servers in round robin style. That also means you can’t be sure that requests from the same IP address are always handled by the same application server. This is important to understand when you persist session information locally on the app servers.
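The rotation itself is simple. Ignoring weights and health state, it can be sketched in a few lines of Python (this is an illustration, not nginx’s actual implementation; the addresses mirror the node_cluster example above):

```python
from itertools import cycle

# Hypothetical backends, mirroring the node_cluster example above.
servers = ["127.0.0.1:3000", "127.0.0.1:3001", "127.0.0.1:3002"]
rotation = cycle(servers)

def next_server():
    """Return the next backend in round-robin order."""
    return next(rotation)

# Six consecutive requests cycle through the list exactly twice.
assignments = [next_server() for _ in range(6)]
print(assignments)
```

Note that the client address plays no role here, which is exactly why the same client can land on different backends.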

Least Connected

With least connected load balancing, nginx avoids forwarding traffic to a server that is already busy and instead passes new requests to the server with the fewest active connections. This method is useful when operations on the application servers take longer to complete, because it helps avoid overload situations: nginx doesn’t pile additional requests onto servers that are already under load.

Configure the least connected mechanism by adding the least_conn directive as the first line within the upstream module.

upstream node_cluster {
    least_conn;  # least connected load balancing
    server 127.0.0.1:3000;  # Node.js instance 1
    server 127.0.0.1:3001;  # Node.js instance 2
    server 127.0.0.1:3002;  # Node.js instance 3
}

Apply the configuration changes to nginx by using the reload or restart command (sudo service nginx reload|restart).
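The selection rule behind least_conn boils down to “pick the minimum.” A minimal Python sketch of that idea, using hypothetical connection counts (again an illustration, not nginx’s internal bookkeeping):

```python
# Hypothetical snapshot of active connection counts per backend.
active = {
    "127.0.0.1:3000": 4,
    "127.0.0.1:3001": 1,
    "127.0.0.1:3002": 7,
}

def least_connected(conns):
    """Pick the backend with the fewest active connections."""
    return min(conns, key=conns.get)

choice = least_connected(active)
print(choice)  # the least busy backend
```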

IP Hash

When utilizing the IP hash method, nginx applies a hash algorithm to the requesting IP address and assigns the request to a specific server. This load balancing method makes sure that requests from the same IP address are assigned to the same application server. If you persist session information locally on a given server, you should use this load balancing technique to avoid nerve-wracking re-logins.

Configure the IP hash method by adding the ip_hash directive as the first line of the upstream module:

upstream node_cluster {
    ip_hash;  # IP hash based load balancing
    server 127.0.0.1:3000;  # Node.js instance 1
    server 127.0.0.1:3001;  # Node.js instance 2
    server 127.0.0.1:3002;  # Node.js instance 3
}

Restart (or reload) nginx to apply the configuration changes. If you set up the Vagrant box to test nginx’s configuration right away, all your requests will be answered by the same app server.
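The stable client-to-server mapping described above can be sketched in Python. The CRC32 hash here is just a stand-in to illustrate the idea; nginx uses its own hash function over part of the client address:

```python
import zlib

servers = ["127.0.0.1:3000", "127.0.0.1:3001", "127.0.0.1:3002"]

def pick_server(client_ip):
    """Deterministically map a client IP to one backend.

    The same input always yields the same hash, so repeat visitors
    keep landing on the same application server.
    """
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

# Repeated requests from one address always hit the same backend.
print(pick_server("203.0.113.7"))
print(pick_server("203.0.113.7"))
```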

Balancing Weights

You can customize the nginx load balancing configuration even further by adding individual weights to any of the available application servers. With the help of weights, you’re able to influence the frequency a server is selected by nginx to handle requests. This makes sense if you have servers with more hardware resources than others within your cluster.

Assigning weights to app servers is done with the weight directive, placed directly after the address of the application server.

upstream node_cluster {
    server 127.0.0.1:3000 weight=1;  # Node.js instance 1
    server 127.0.0.1:3001 weight=2;  # Node.js instance 2
    server 127.0.0.1:3002 weight=3;  # Node.js instance 3
}

Using the weight setup above, every six requests are distributed by nginx as follows: one request is forwarded to instance 1, two requests to instance 2, and three requests to instance 3.

You can omit weight=1, because this is the default value. You can also define a weight for just a single server.

upstream node_cluster {
    server 127.0.0.1:3000;           # Node.js instance 1
    server 127.0.0.1:3001;           # Node.js instance 2
    server 127.0.0.1:3002 weight=4;  # Node.js instance 3
}

The new configuration, with only one weight defined, changes the behavior compared to the previous configuration. Now, every six new requests are handled as follows: one request is passed to instance 1, another one to instance 2, and four requests are sent to instance 3.
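The per-cycle arithmetic can be checked with a naive Python sketch that repeats each backend weight-many times. nginx interleaves requests more smoothly than this, but the counts per cycle come out the same:

```python
from collections import Counter

# Weights from the example above: only instance 3 carries weight=4.
weights = {
    "127.0.0.1:3000": 1,
    "127.0.0.1:3001": 1,
    "127.0.0.1:3002": 4,
}

def one_cycle(weights):
    """Naive weighted rotation: repeat each backend `weight` times per cycle."""
    return [server for server, w in weights.items() for _ in range(w)]

per_cycle = Counter(one_cycle(weights))
print(per_cycle)
```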

Health Checks & Max Fails

There are multiple reasons why application servers don’t respond or seem to be offline. nginx includes a mechanism, called max fails, to mark specific app servers as inactive if they fail to respond to requests. Use the max_fails directive to customize the number of failed attempts nginx will make before the server is marked offline. The default value for max_fails is 1. You can disable these health checks by setting max_fails=0.

upstream node_cluster {
    server 127.0.0.1:3000 max_fails=3;  # Node.js instance 1
    server 127.0.0.1:3001;              # Node.js instance 2
    server 127.0.0.1:3002 weight=4;     # Node.js instance 3
}

Once nginx has marked an application instance as failed, the default fail_timeout of 10s starts. Within that time frame, nginx doesn’t pass any traffic to the offline server. After that period, nginx tries to reach the server again; if the number of consecutive failed attempts reaches max_fails once more, the timeout restarts.

upstream node_cluster {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=20s;  # Node.js instance 1
    server 127.0.0.1:3001;                               # Node.js instance 2
    server 127.0.0.1:3002 weight=4;                      # Node.js instance 3
}

As you can see, there’s no problem combining these directives.
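The interplay of max_fails and fail_timeout can be modeled with a small Python sketch. This is a toy model of the behavior described above, not nginx’s actual bookkeeping:

```python
class Backend:
    """Toy model of the max_fails / fail_timeout behavior described above."""

    def __init__(self, addr, max_fails=1, fail_timeout=10.0):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_since = None  # timestamp when the server was marked offline

    def record_failure(self, now):
        """Count a failed request; mark the server offline at max_fails."""
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_since = now

    def is_available(self, now):
        """Offline servers become eligible again once fail_timeout elapses."""
        if self.down_since is None:
            return True
        if now - self.down_since >= self.fail_timeout:
            # Timeout elapsed: reset and allow another attempt.
            self.down_since = None
            self.fails = 0
            return True
        return False

b = Backend("127.0.0.1:3000", max_fails=3, fail_timeout=20.0)
for t in (0.0, 1.0, 2.0):
    b.record_failure(t)

print(b.is_available(5.0))   # still inside the 20 s timeout
print(b.is_available(25.0))  # timeout elapsed, eligible again
```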

Conclusion

As you learned throughout this article, nginx provides a lot of capabilities to enable load balancing for your cluster of application servers. Besides multiple load balancing methods like round robin, least connected, and IP hash, you can set specific weights for your servers to pass more or less traffic to individual machines.

nginx also ships with support for basic health checks, so that failed machines are excluded and no requests are passed to those hosts.

We hope this guide helps you to seamlessly set up your own app cluster. If you run into any issues, please get in touch via the comments below.


Additional Resources

  • nginx’s official guide to 
