AWS EC2 with Nginx and PHP-FPM - can't push CPU above 50%

滞留童年车 posted on 2023-02-06 14:51

I am trying to test AWS Auto Scaling, and for that I need to push an EC2 instance to the point where a trigger (say, CPU above 80% for a few minutes) causes another instance to launch.
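
For reference, the kind of trigger described here can be wired up as a simple scaling policy plus a CloudWatch alarm; a minimal sketch using the AWS CLI, where the group name, policy name, and alarm ARN are placeholders:

# Simple scaling policy: add one instance when the alarm fires (names are placeholders)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name add-one-instance \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1

# CloudWatch alarm: average CPU above 80% for three consecutive 60-second periods
aws cloudwatch put-metric-alarm \
  --alarm-name my-asg-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average \
  --period 60 \
  --evaluation-periods 3 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <policy ARN returned by put-scaling-policy>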

The problem I am finding is that I cannot get the CPU above 50%.

I am using Nginx. I have raised worker_connections from 1024 to a larger number and set worker_processes to auto. The fastcgi settings are as follows:

fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_max_temp_file_size 0;
fastcgi_intercept_errors off;
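
For reference, the worker settings mentioned above live at the top level and in the events block of nginx.conf; a minimal sketch, with 4096 as an illustrative value rather than the exact number in use:

# /etc/nginx/nginx.conf (sketch)
worker_processes auto;        # one worker process per CPU core

events {
    worker_connections 4096;  # raised from the default of 1024
}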

I have php-fpm set up with the dynamic process manager and the settings below, but I have also tuned these numbers without any real difference:

pm.max_children = 50
pm.start_servers = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 50
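
Worth noting about these numbers: the CPU only saturates if enough PHP workers are busy doing actual work rather than waiting on I/O. For a pure load test, a static pool with a generous child count takes the process manager out of the equation; a sketch with illustrative values (max_children multiplied by the per-process memory footprint has to stay below available RAM):

; /etc/php-fpm.d/www.conf (load-test sketch, values are illustrative)
pm = static
pm.max_children = 100     ; keep max_children * average process size below available RAM
pm.max_requests = 500     ; recycle workers periodically to contain memory growth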

I am load testing with siege and can sustain 1000 concurrent connections for 30 seconds, getting roughly 3500 replies with 100% availability (nothing dropped) and no errors reported. I have also run siege from 3 EC2 instances with 1000 concurrent connections each; I sometimes see a few socket errors, but the CPU never goes above 50%. Usually the result is still roughly 3500 replies in total, just spread across the 3 siege instances (so fewer from each).
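
For context, the siege runs described above correspond to an invocation along these lines (the URL is a placeholder):

# 1000 concurrent users for 30 seconds against the test endpoint (placeholder URL)
siege -c 1000 -t 30S http://my-test-instance.example.com/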

I have the PHP slow log set to 10 seconds and there were a few queries in it, so I put in a bigger database (an AWS RDS instance with the maximum possible IOPS, just for testing), which made no difference. I also tried a bigger EC2 instance to see what would happen, and still the CPU would not go above 50%.
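
For reference, the slow-log threshold mentioned above is configured per pool in php-fpm; a sketch, where the log path is a typical default rather than necessarily the one in use here:

; /etc/php-fpm.d/www.conf (sketch)
slowlog = /var/log/php-fpm/www-slow.log   ; where slow-request backtraces are written
request_slowlog_timeout = 10s             ; trace any request running longer than 10 seconds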

Finally, here is my /etc/sysctl.conf:

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
# net.bridge.bridge-nf-call-ip6tables = 0
# net.bridge.bridge-nf-call-iptables = 0
# net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# Adam Added Below
kernel.pid_max = 262144
net.ipv4.tcp_window_scaling = 1
vm.max_map_count = 262144


# Do less swapping
fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2


### GENERAL NETWORK SECURITY OPTIONS ###

# Number of SYN+ACK retries for passive TCP connections.
net.ipv4.tcp_synack_retries = 2

# Allowed local port range
#net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_local_port_range = 2000 65535

# Protect Against TCP Time-Wait
net.ipv4.tcp_rfc1337 = 1

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# Decrease the time default value for connections to keep alive
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15


### TUNING NETWORK PERFORMANCE ###

# Default Socket Receive Buffer
net.core.rmem_default = 31457280

# Maximum Socket Receive Buffer
net.core.rmem_max = 12582912

# Default Socket Send Buffer
net.core.wmem_default = 31457280

# Maximum Socket Send Buffer
net.core.wmem_max = 12582912

# Maximum Number of Packets
net.core.netdev_max_backlog = 30000

# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144

# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.core.rmem_default = 131072
net.core.rmem_max = 16777216

# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.core.wmem_default = 131072
net.core.wmem_max = 16777216

# Increase number of incoming connections
net.core.somaxconn = 32768

# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824

# Increase the tcp-time-wait buckets pool size to prevent simple DOS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

Is there something that could be limiting the server to around 50% CPU usage? (Memory usage is only around 25%.) I have never got the CPU above 55%.

I want to be able to push the server to 90%+ before another EC2 instance is launched, so that I really get my money's worth out of each server.

Any suggestions as to why I might be hitting this limit and what I could try?

Thanks
