Connector Exception: Too many open files (Linux)
Exception messages about too many open files in the log file for an eMessage Connector running in a Linux environment indicate that you may need to raise the ulimit for file descriptors in the configuration of the Connector. The first section below shows the messages that the Connector log contains when this exception is thrown. The other sections explain the issue in more detail and provide possible resolutions.
* 
This content applies to an eMessage Connector connected to a ThingWorx Platform, v.8.5.x to v.9.1.x, running on a supported Red Hat or CentOS Linux platform. Before attempting to change the ulimit on your Linux operating system, make sure that you know the location and name of the relevant configuration file. The Caution note at the beginning of the "Setting the ulimit" section below provides more detail.
Exception Messages in the Log File 

2021-03-15 08:28:24.222 [vert.x-acceptor-thread-0] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at io.netty.util.internal.SocketUtils$5.run(SocketUtils.java:111)
. . .
The exception recurs in the log until the final occurrence:

2021-03-15 10:03:46.793 [vert.x-acceptor-thread-0] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at io.netty.util.internal.SocketUtils$5.run(SocketUtils.java:111)
at io.netty.util.internal.SocketUtils$5.run(SocketUtils.java:108)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.accept(SocketUtils.java:108)
at io.netty.channel.socket.nio.NioServerSocketChannel.doReadMessages(NioServerSocketChannel.java:147)
at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:75)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:484)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
2021-03-15 10:05:07.627 [main] INFO c.t.c.ConnectionServer - Configure injector
2021-03-15 10:05:07.639 [main] INFO c.t.c.ConnectionServer - Checking for protocol Guice module
2021-03-15 10:05:07.643 [main] INFO c.t.connectionserver.ConfigProvider - Loading configuration
2021-03-15 10:05:08.564 [main] INFO c.t.s.landscape.LandscapeFactory - Initializing landscape with mock backend.
2021-03-15 10:05:08.609 [main] INFO c.t.s.landscape.LandscapeFactory - Service Discovery became available
2021-03-15 10:05:08.610 [main] INFO c.t.s.landscape.LandscapeFactory - Config Store/Distributed Locking became available
2021-03-15 10:05:08.628 [main] WARN c.t.shepard.metrics.CommonMetrics - Existing metrics have been registered before initialization [client.messageRate,server.write.messageRate,cxToPlatform.messageThroughputMeter,client.clientTransportEnqueueTimeHistogram,server.messageThroughputMeter,server.messageThroughputHistogram,client.messageDurationHistogram,server.read.messageRate,server.serverTransportEnqueueTimeHistogram,client.messageThroughputHistogram,cxToPlatform.messageRate,client.messageThroughputMeter]. Registering existing metrics with configured base name
2021-03-15 10:05:08.633 [pool-5-thread-1] INFO c.t.s.StateMachineEventChannel - No handler for Event[LANDSCAPE_INITIALIZED] for State[null], or the ANY state.
2021-03-15 10:05:08.689 [main] INFO c.t.shepard.metrics.CommonMetrics - Metrics initialized with the following reporters: []
2021-03-15 10:05:08.689 [main] INFO c.t.s.landscape.AbstractLandscape - Distributed service discovery/config globally disabled. Disabling service discovery for process-id process-id
2021-03-15 10:05:08.690 [main] INFO c.t.s.landscape.AbstractLandscape - Distributed coordination disabled - use configured process id: 8888
2021-03-15 10:05:08.932 [main] INFO c.t.f.i.AbstractManyPlatformWebSocketFabric - Many Platform WebSocket Fabric enabled without service discovery: endpoints=[wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS]
2021-03-15 10:05:08.956 [main] INFO c.t.s.i.transport.MuxingBytesChannel - Subchannel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-1 in channel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS was no longer valid - unregistering it
2021-03-15 10:05:08.956 [main] INFO c.t.s.i.transport.MuxingBytesChannel - Subchannel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-2 in channel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS was no longer valid - unregistering it
2021-03-15 10:05:08.956 [main] INFO c.t.s.i.transport.MuxingBytesChannel - Subchannel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-3 in channel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS was no longer valid - unregistering it
2021-03-15 10:05:08.957 [main] INFO c.t.s.i.transport.MuxingBytesChannel - Subchannel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-4 in channel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS was no longer valid - unregistering it
2021-03-15 10:05:08.957 [main] INFO c.t.s.i.transport.MuxingBytesChannel - Subchannel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-5 in channel wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS was no longer valid - unregistering it
2021-03-15 10:05:08.957 [main] INFO c.t.s.i.transport.MuxingBytesChannel - Needed to reconnect subchannels on client endpoint [id: wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS] : [active: 0, max: 5]
2021-03-15 10:05:09.147 [main] WARN c.t.c.p.s.k.impl.StoreServiceImpl - The contents of cache: ThingNameStash will never expire: -1. Consider setting 'default-expire-time' or the cache 'expire-time'.
2021-03-15 10:05:09.834 [main] INFO c.t.c.p.s.t.i.PropertyTypeCacheImpl - Property-type update service name (property-type-cache.update-service-name) not defined; not registering
2021-03-15 10:05:09.931 [main] INFO c.t.c.ConnectionServer - Create Instance
2021-03-15 10:05:09.931 [main] INFO c.t.c.ConnectionServer - Bootstrapping connection server
2021-03-15 10:05:09.942 [pool-5-thread-1] INFO c.t.s.StateMachineEventChannel - Control plane has event handler for states ANY: [MESSAGE_QUEUE_PRODUCER_ERRORS_BELOW_THRESHOLD, MESSAGE_QUEUE_PRODUCER_ERRORS_ABOVE_THRESHOLD, HEALTH_CHECK_SERVICE_STOPPED, MESSAGE_QUEUE_CONSUMER_NOT_RECEIVING_MESSAGES, PLATFORM_WEBSOCKETS_READY, PLATFORM_EXTENSIONS_READY, PROTOCOL_READY, HUB_CHANNEL_READY, MESSAGE_QUEUE_READER_CREATED, CONNECTION_SERVER_THING_READY, METRICS_SERVICE_STOPPED, PLATFORM_CLIENT_STOPPED, SHUTDOWN, TRACING_ENDPOINT_CHANGED, MESSAGE_QUEUE_CONSUMER_RECEIVING_MESSAGES, MESSAGE_QUEUE_CONSUMER_PARTITIONS_ASSIGNED, MESSAGE_QUEUE_CONSUMER_LAG_BELOW_THRESHOLD, METRICS_SERVICE_READY, PLATFORM_CLIENT_READY, MESSAGE_QUEUE_DISCOVERED_CHANGES, MESSAGE_QUEUE_CONSUMER_LAG_ABOVE_THRESHOLD, MESSAGE_QUEUE_INITIALIZED, HEALTH_CHECK_SERVICE_READY, HUB_CHANNEL_STOPPED, PLATFORM_EXTENSIONS_STOPPED, PROTOCOL_STOPPED, MESSAGE_QUEUE_PRODUCER_LAG_BELOW_THRESHOLD, MESSAGE_QUEUE_DISCONNECTED, CONNECTION_SERVER_THING_STOPPED, MESSAGE_QUEUE_PRODUCER_LAG_ABOVE_THRESHOLD, MESSAGE_QUEUE_CONSUMER_NO_PARTITIONS_ASSIGNED, MESSAGE_QUEUE_CONNECTED]
2021-03-15 10:05:09.942 [pool-5-thread-1] INFO c.t.s.StateMachineEventChannel - Control plane has event handler for states BOOTSTRAPPING: [DATA_PLANE_ENABLED, MANAGEMENT_PLANE_ENABLED, LANDSCAPE_INITIALIZED, BEGIN_BOOTSTRAP]
2021-03-15 10:05:09.943 [pool-4-thread-1] INFO c.t.s.DispatchingEventChannel - Management plane has event handlers: [ENABLE_MANAGEMENT_PLANE, BEGIN_BOOTSTRAP]
2021-03-15 10:05:09.954 [pool-2-thread-1] INFO c.t.s.DispatchingEventChannel - Data plane has event handlers: [INIT_CONNECTION_SERVER_THING, STOP_HEALTH_CHECK_SERVICE, INIT_PROTOCOL, STOP_HUB_CHANNEL, INIT_PLATFORM_CLIENT, ENABLE_DATA_PLANE, INIT_PLATFORM_EXTENSIONS, INIT_HEALTH_CHECK_SERVICE, INIT_HUB_CHANNEL, STOP_PROTOCOL, CHANNEL_ENDPOINTS_CHANGED, STOP_PLATFORM_CLIENT, STOP_PLATFORM_EXTENSIONS, STOP_METRICS_SERVICE, STOP_CONNECTION_SERVER_THING, INIT_METRICS_SERVICE]
2021-03-15 10:05:10.025 [NettyClient-NIO-3] INFO c.t.s.i.p.v1.V1ProtocolContext - Preparing new Connection Authentication Request for channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-3, closed=false], request: V1AuthMessage [header=V1MessageHeader [messageType=20, requestId=1, endpointId=-1, sessionId=-1, flags=0], securityClaims=toString() called on SecurityClaims. No output emitted as this could be a security issue. If you want to log information you must explicitly do so at your own risk. It is NOT recommended you do so.]
2021-03-15 10:05:10.085 [NettyClient-NIO-3] INFO c.t.s.i.p.v1.V1ProtocolContext - Authentication Request was SUCCESSFUL [session id: 734325024, endpointId: 53, requestId: 1, channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-3, closed=false] ]
2021-03-15 10:05:10.085 [NettyClient-NIO-3] INFO c.t.s.i.p.v1.V1ProtocolContext - Preparing additional Connection Authentication Request [channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-5, closed=false], request: V1AuthMessage [header=V1MessageHeader [messageType=20, requestId=2, endpointId=53, sessionId=734325024, flags=0], securityClaims=toString() called on SecurityClaims. No output emitted as this could be a security issue. If you want to log information you must explicitly do so at your own risk. It is NOT recommended you do so.]]
2021-03-15 10:05:10.087 [NettyClient-NIO-3] INFO c.t.s.i.p.v1.V1ProtocolContext - Preparing additional Connection Authentication Request [channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-1, closed=false], request: V1AuthMessage [header=V1MessageHeader [messageType=20, requestId=3, endpointId=53, sessionId=734325024, flags=0], securityClaims=toString() called on SecurityClaims. No output emitted as this could be a security issue. If you want to log information you must explicitly do so at your own risk. It is NOT recommended you do so.]]
2021-03-15 10:05:10.087 [NettyClient-NIO-3] INFO c.t.s.i.p.v1.V1ProtocolContext - Preparing additional Connection Authentication Request [channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-2, closed=false], request: V1AuthMessage [header=V1MessageHeader [messageType=20, requestId=4, endpointId=53, sessionId=734325024, flags=0], securityClaims=toString() called on SecurityClaims. No output emitted as this could be a security issue. If you want to log information you must explicitly do so at your own risk. It is NOT recommended you do so.]]
2021-03-15 10:05:10.088 [NettyClient-NIO-3] INFO c.t.s.i.p.v1.V1ProtocolContext - Preparing additional Connection Authentication Request [channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-4, closed=false], request: V1AuthMessage [header=V1MessageHeader [messageType=20, requestId=5, endpointId=53, sessionId=734325024, flags=0], securityClaims=toString() called on SecurityClaims. No output emitted as this could be a security issue. If you want to log information you must explicitly do so at your own risk. It is NOT recommended you do so.]]
2021-03-15 10:05:10.092 [NettyClient-NIO-5] INFO c.t.s.i.p.v1.V1ProtocolContext - Authentication Request was SUCCESSFUL [session id: 734325024, endpointId: 53, requestId: 2, channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-5, closed=false] ]
2021-03-15 10:05:10.095 [NettyClient-NIO-1] INFO c.t.s.i.p.v1.V1ProtocolContext - Authentication Request was SUCCESSFUL [session id: 734325024, endpointId: 53, requestId: 3, channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-1, closed=false] ]
2021-03-15 10:05:10.096 [NettyClient-NIO-3] INFO c.t.f.i.AbstractManyPlatformWebSocketFabric - Opening fabric channel com.thingworx.fabric.impl.v1.V1ManyPlatformWebSocketFabric@7ff02fc8
2021-03-15 10:05:10.103 [NettyClient-NIO-2] INFO c.t.s.i.p.v1.V1ProtocolContext - Authentication Request was SUCCESSFUL [session id: 734325024, endpointId: 53, requestId: 4, channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-2, closed=false] ]
2021-03-15 10:05:10.104 [NettyClient-NIO-4] INFO c.t.s.i.p.v1.V1ProtocolContext - Authentication Request was SUCCESSFUL [session id: 734325024, endpointId: 53, requestId: 5, channel: NettyBytesChannel [id=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS-4, closed=false] ]
2021-03-15 10:05:10.104 [NettyClient-NIO-4] INFO c.t.f.i.AbstractManyPlatformWebSocketFabric - Successful connect to platform: endpoint=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS, channel=MuxingBytesChannel [started=true, connected=true, uri=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS, getId()=wss://karlstorz-prod.cloud.thingworx.com:443/Thingworx/WS]
2021-03-15 10:05:10.152 [pool-5-thread-1] INFO c.t.s.vertx.AbstractVertxManager - Vertx Http Server Initialized successfully on 11998
2021-03-15 10:05:10.154 [pool-5-thread-1] INFO c.t.shepard.vertx.VertxManager - GET Application endpoint successfully registered for /state
2021-03-15 10:05:10.155 [pool-5-thread-1] INFO c.t.shepard.vertx.VertxManager - GET Application endpoint successfully registered for /health
2021-03-15 10:05:10.155 [pool-5-thread-1] INFO c.t.shepard.vertx.VertxManager - GET Application endpoint successfully registered for /ready
2021-03-15 10:05:10.163 [pool-2-thread-1] INFO c.t.c.ConnectionServer - Re-configuring FabricBytesChannel with brokers null
2021-03-15 10:05:10.172 [pool-2-thread-1] INFO c.t.c.ConnectionServer - Initializing Platform Client
2021-03-15 10:05:10.172 [pool-2-thread-1] INFO c.t.c.p.s.i.ServiceDiscoveryServiceImpl - Registering Connection Server for Fabric...
2021-03-15 10:05:10.172 [pool-2-thread-1] INFO c.t.s.landscape.AbstractLandscape - Distributed service discovery/config globally disabled. Not publishing module null
2021-03-15 10:05:10.173 [pool-2-thread-1] INFO c.t.connectionserver.PlatformImpl - Starting ConnectionServer: ID=8888, Platform protocol=V1, Platform transport=WEBSOCKETS
2021-03-15 10:05:10.173 [pool-2-thread-1] INFO c.t.connectionserver.PlatformImpl - Waiting for connection to platform...
2021-03-15 10:05:10.173 [pool-2-thread-1] INFO c.t.connectionserver.PlatformImpl - Connected to platform. Continuing cxserver startup.
Questions Arising from These Error Messages 
The following questions are often asked when users observe the Too many open files error message:
1. What are the correct ulimit -n settings when running eMessage Connector in a Linux environment?
2. How do I set the ulimit for the eMessage Connector?
3. How many file handles or file descriptors are required for eMessage Connector to operate?
4. How do I resolve the following errors:
Too many open files
WebSocket is already in CLOSING or CLOSED state
In addition, users ask the following questions:
What do I do to resolve the following warning in the emessage.log file?

WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
What do I need to do when I cannot initiate remote sessions or register any new Axeda devices? The ThingWorx Application.log file shows the following messages:

Warn: Server WebSocket closed unexpectedly, unregistering from Server Endpoint
[ws session id: xx, endpoint id: xx, endpoint name: Emessage-cxserver-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx]
java.io.IOException: Broken pipe:

Warn: Endpoint does not contain binding but Thing is already connected.
Bind attempted with

Error: Error trying to process message: Binding failed, see server log for more information
The next section answers these questions by explaining how many file descriptors may be needed, how to set the ulimit, and how to verify the new setting.
Setting the ulimit 
* 
The content provided in this section applies to an eMessage Connector connected to a ThingWorx Platform, v.8.5.x to v.9.1.x, running on a supported Red Hat or CentOS Linux platform. It is important to note that the location and name of the configuration file may vary from one Linux distribution to another. For example, in Ubuntu, the path is /etc/security/limits.conf.
In addition, on some systems you may need to run a command to set the ulimit instead of changing a configuration file, and take a separate step to make the change persistent. Be sure to check the documentation for your Linux operating system.
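For example, on systems where a per-session change is sufficient, the limit can be raised in the shell that launches the Connector. This is a sketch only: the 30720 value is the minimum discussed in this section, the change does not persist across logins, and the fallback message is illustrative.

```shell
# Show the current soft and hard limits for open files
ulimit -Sn
ulimit -Hn

# Raise the limit for this session only; the hard limit caps how high
# a non-root user can raise the soft limit
ulimit -n 30720 2>/dev/null || echo "hard limit too low; raise it in limits.conf first"

# Confirm the limit now in effect for this shell
ulimit -n
```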
In certain scenarios, the eMessage Connector may require a minimum of 30720 file descriptors to operate properly in a Linux environment.
To set the ulimit for file descriptors, modify the file /etc/security/limits.d/80-nofiles.conf to include the following settings for the user that launches the eMessage Connector process:
<eMessage Connector User> soft nofile 30720
<eMessage Connector User> hard nofile 30720
Where <eMessage Connector User> is the username of the Linux user that runs the eMessage Connector Service.
After setting the file descriptor limit, verify that the operating system is configured correctly by executing ulimit -n as the user that runs the eMessage Connector.
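One way to perform this check from a root shell is sketched below. The user name twxuser is a placeholder for the actual Connector account, and the fallback simply reports the current shell's own limit if su is unavailable.

```shell
# Report the file-descriptor limit that a fresh login session for the
# Connector user would receive; fall back to the current shell's limit
su - twxuser -c 'ulimit -n' 2>/dev/null || ulimit -n
```

Because pam_limits applies limits.conf settings at login, running the check through su - exercises the same path the Connector user's sessions take.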
If you start the eMessage Connector as a service, you also need to take the following actions:
1. Under /etc/systemd/system or /usr/lib/systemd/system, open the emessage.service file for editing.
2. Add the following line in the [Service] definition:

LimitNOFILE=30720
For example:

...
[Service]
...
LimitNOFILE=30720
...
3. To confirm that the changes have taken effect, first find the eMessage Connector PID by using the following command:

ps -ef | grep emessage
4. Now that you have the PID of the Connector, confirm that the maximum number of open files is set to 30720:

more /proc/<PID>/limits
Where <PID> is the Process ID for the eMessage Connector process.
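If /proc/&lt;PID&gt;/limits still shows the old value after editing the unit file, keep in mind that systemd re-reads unit files only on a daemon reload. A sketch of the follow-up commands, run as root, is shown below; emessage is the unit name from the file edited above, and the error message is illustrative.

```shell
# Reload systemd's unit definitions so the edited emessage.service is re-read,
# then restart the service so the new LimitNOFILE applies to the process
systemctl daemon-reload && systemctl restart emessage \
  || echo "reload/restart failed: run as root and confirm the unit name"
```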