We built an Akka.NET cluster infrastructure for SMS, e-mail, and push notifications. There are three different types of nodes in the system: client, sender, and lighthouse. The web application and the API application use the client role (both are hosted in IIS). The lighthouse and sender roles are hosted as Windows services. We also run four console applications that play the same sender role as the Windows services.
We have been hitting a port-exhaustion problem on our web servers for about two weeks. A web server starts consuming ports rapidly, and after a while we can no longer perform any SQL operations. Sometimes we have no choice but to do an IIS reset. The problem occurs when more than one node plays the sender role. We diagnosed it and tracked down the source of the problem.
---------------
HOST                 OPEN    WAIT
SRV_NOTIFICATION     3429    0
SRV_LOCAL            198     0
SRV_UNDEFINED_IPV4   23      0
SRV_DATABASE         15      0
SRV_AUTH             4       0
SRV_API              6       0
SRV_UNDEFINED_IPV6   19      0
SRV_INBOUND          12347   5

TotalPortsInUse   : 17286
MaxUserPorts      : 64510
TcpTimedWaitDelay : 30
03/23/2017 09:30:10
---------------
SRV_NOTIFICATION is the server that runs the lighthouse and sender nodes. SRV_INBOUND is our web server. After checking this table, we looked at which ports were allocated on the web server and got the results shown below. In netstat there were more than 12,000 connections like these:
TCP 192.168.1.10:65531 192.168.1.10:3564   ESTABLISHED 5716 [w3wp.exe]
TCP 192.168.1.10:65532 192.168.1.101:17527 ESTABLISHED 5716 [w3wp.exe]
TCP 192.168.1.10:65533 192.168.1.101:17527 ESTABLISHED 5716 [w3wp.exe]
TCP 192.168.1.10:65534 192.168.1.10:3564   ESTABLISHED 5716 [w3wp.exe]

192.168.1.10        = web server
192.168.1.10:3564   = API
192.168.1.101:17527 = lighthouse
The connections are being opened but never closed.
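A quick way to see which remote endpoints are holding the ports is to aggregate the netstat output by remote address. The sketch below (POSIX shell + awk, fed with a captured sample instead of live `netstat -ano` output) is an illustration of how such a check could look, not part of our tooling:

```shell
# Count ESTABLISHED connections per remote endpoint from netstat-style
# output. The sample below stands in for real `netstat -ano` output;
# fields are: proto, local address, remote address, state, PID.
netstat_sample='TCP 192.168.1.10:65531 192.168.1.10:3564 ESTABLISHED 5716
TCP 192.168.1.10:65532 192.168.1.101:17527 ESTABLISHED 5716
TCP 192.168.1.10:65533 192.168.1.101:17527 ESTABLISHED 5716
TCP 192.168.1.10:65534 192.168.1.10:3564 ESTABLISHED 5716'

by_endpoint=$(printf '%s\n' "$netstat_sample" |
  awk '$4 == "ESTABLISHED" { count[$3]++ }
       END { for (ep in count) printf "%d %s\n", count[ep], ep }' |
  sort -rn)

# Endpoints with an unexpectedly high count are the leak suspects.
printf '%s\n' "$by_endpoint"
```

Against the real output, the two endpoints in the dump above (the API node and the lighthouse) accounted for more than 12,000 sockets.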
After a deployment, our web and API applications leave and rejoin the cluster, and they are configured with fixed ports. We monitor our cluster with the application created by @cgstevens. Even though we implemented graceful-shutdown logic for the actor system, sometimes the web and API applications fail to leave the cluster, so we have to remove the nodes manually and restart the actor system.
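For context, graceful-shutdown logic of this kind typically follows the leave-then-terminate pattern. The snippet below is a simplified sketch of that pattern (the class name, timeout handling, and structure are illustrative assumptions, not our exact code):

```csharp
// Sketch: ask the node to leave the cluster, then terminate the
// ActorSystem only after the cluster has confirmed the removal.
using System;
using System.Threading;
using Akka.Actor;
using Akka.Cluster;

public static class GracefulExit
{
    public static void LeaveAndShutDown(ActorSystem system, TimeSpan timeout)
    {
        var cluster = Cluster.Get(system);
        var removed = new ManualResetEventSlim();

        // Fires once this node's member status reaches Removed.
        cluster.RegisterOnMemberRemoved(() => removed.Set());
        cluster.Leave(cluster.SelfAddress);

        // If the leave never completes (the failure mode we observed),
        // fall through after the timeout and terminate anyway.
        removed.Wait(timeout);
        system.Terminate().Wait(timeout);
    }
}
```

When the leave handshake is lost (for example, the seed node never acknowledges it), the timeout path runs and the node is later shown as unreachable, which matches the behavior described above.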
We reproduced the problem in our development environment and recorded the following video:
https://drive.google.com/file/d/0B5ZNfLACId3jMWUyOWliMUhNWTQ/view
The HOCON configuration of each node is as follows:
WEB and API
<akka>
  <hocon><![CDATA[
    akka {
      loglevel = DEBUG
      actor {
        provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
        deployment {
          /coordinatorRouter {
            router = round-robin-group
            routees.paths = ["/user/NotificationCoordinator"]
            cluster {
              enabled = on
              max-nr-of-instances-per-node = 1
              allow-local-routees = off
              use-role = sender
            }
          }
          /decidingRouter {
            router = round-robin-group
            routees.paths = ["/user/NotificationDeciding"]
            cluster {
              enabled = on
              max-nr-of-instances-per-node = 1
              allow-local-routees = off
              use-role = sender
            }
          }
        }
        serializers {
          wire = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
          "System.Object" = wire
        }
        debug {
          receive = on
          autoreceive = on
          lifecycle = on
          event-stream = on
          unhandled = on
        }
      }
      remote {
        helios.tcp {
          transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
          applied-adapters = []
          transport-protocol = tcp
          hostname = "192.168.1.10"
          port = 3564
        }
      }
      cluster {
        seed-nodes = ["akka.tcp://notificationSystem@192.168.1.101:17527"]
        roles = [client]
      }
    }
  ]]>
  </hocon>
</akka>

Lighthouse
<akka>
  <hocon>
    <![CDATA[
    lighthouse {
      actorsystem: "notificationSystem"
    }
    akka {
      actor {
        provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
        serializers {
          wire = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
          "System.Object" = wire
        }
      }
      remote {
        log-remote-lifecycle-events = DEBUG
        helios.tcp {
          transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
          applied-adapters = []
          transport-protocol = tcp
          #will be populated with a dynamic host-name at runtime if left uncommented
          #public-hostname = "192.168.1.100"
          hostname = "192.168.1.101"
          port = 17527
        }
      }
      loggers = ["Akka.Logger.NLog.NLogLogger,Akka.Logger.NLog"]
      cluster {
        seed-nodes = ["akka.tcp://notificationSystem@192.168.1.101:17527"]
        roles = [lighthouse]
      }
    }
    ]]>
  </hocon>
</akka>

Sender
<akka>
  <hocon><![CDATA[
    akka {
      # stdout-loglevel = DEBUG
      loglevel = DEBUG
      # log-config-on-start = on
      loggers = ["Akka.Logger.NLog.NLogLogger, Akka.Logger.NLog"]
      actor {
        debug {
          # receive = on
          # autoreceive = on
          # lifecycle = on
          # event-stream = on
          # unhandled = on
        }
        provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
        serializers {
          wire = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
          "System.Object" = wire
        }
        deployment {
          /NotificationCoordinator/LoggingCoordinator/DatabaseActor {
            router = round-robin-pool
            resizer {
              enabled = on
              lower-bound = 3
              upper-bound = 5
            }
          }
          /NotificationDeciding/NotificationDecidingWorkerActor {
            router = round-robin-pool
            resizer {
              enabled = on
              lower-bound = 3
              upper-bound = 5
            }
          }
          /ScheduledNotificationCoordinator/SendToProMaster/JobToProWorker {
            router = round-robin-pool
            resizer {
              enabled = on
              lower-bound = 3
              upper-bound = 5
            }
          }
        }
      }
      remote {
        log-remote-lifecycle-events = DEBUG
        log-received-messages = on
        helios.tcp {
          transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
          applied-adapters = []
          transport-protocol = tcp
          #will be populated with a dynamic host-name at runtime if left uncommented
          #public-hostname = "POPULATE STATIC IP HERE"
          hostname = "192.168.1.101"
          port = 0
        }
      }
      cluster {
        seed-nodes = ["akka.tcp://notificationSystem@192.168.1.101:17527"]
        roles = [sender]
      }
    }
  ]]></hocon>
</akka>

Cluster.Monitor
<akka>
  <hocon>
    <![CDATA[
    akka {
      stdout-loglevel = INFO
      loglevel = INFO
      log-config-on-start = off
      actor {
        provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
        serializers {
          wire = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
          "System.Object" = wire
        }
        deployment {
          /clustermanager {
            dispatcher = akka.actor.synchronized-dispatcher
          }
        }
      }
      remote {
        log-remote-lifecycle-events = INFO
        log-received-messages = off
        log-sent-messages = off
        helios.tcp {
          transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
          applied-adapters = []
          transport-protocol = tcp
          #will be populated with a dynamic host-name at runtime if left uncommented
          #public-hostname = "127.0.0.1"
          hostname = "192.168.1.101"
          port = 0
        }
      }
      cluster {
        seed-nodes = ["akka.tcp://notificationSystem@192.168.1.101:17527"]
        roles = [ClusterManager]
        client {
          initial-contacts = ["akka.tcp://notificationSystem@192.168.1.101:17527/system/receptionist"]
        }
      }
    }
    ]]>
  </hocon>
</akka>

Posted on 2017-03-31 14:21:55
This is a confirmed bug that will likely be fixed by the CoordinatedShutdown feature coming in Akka.NET v1.2:
https://github.com/akkadotnet/akka.net/issues/2575
You can use the latest nightly builds until 1.2 is released:
http://getakka.net/docs/akka-developers/nightly-builds
Edit: Akka.NET v1.2 has been released, but this bug fix was postponed to v1.3.
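Once on a version with CoordinatedShutdown, the hand-rolled leave logic can be replaced by the built-in shutdown phases, which are configured via HOCON. Assuming the setting names match the ported JVM Akka defaults (treat this as a hedged fragment to verify against the release docs, not a drop-in config):

```hocon
akka {
  coordinated-shutdown {
    # terminate the ActorSystem as the final shutdown phase
    terminate-actor-system = on
  }
  cluster {
    # also run CoordinatedShutdown when this node is downed or removed,
    # so a node that fails to leave cleanly still releases its sockets
    run-coordinated-shutdown-when-down = on
  }
}
```

With this in place, shutdown is triggered from code via `CoordinatedShutdown.Get(system).Run()` rather than calling `Cluster.Leave` and `Terminate` by hand.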
https://stackoverflow.com/questions/43128080