LiveNX uses a scalable distributed computing architecture that allows it to scale to the largest enterprise networks. The architecture is split into three tiers: the client application, the Server, and collection Nodes, enabling distributed deployments and horizontal scaling for performance. The main difference from previous versions is that the collection capabilities have been separated from the Server into individual Nodes at the bottom of the architecture.
The client application can be run via Web Start directly from the LiveNX Web Server or installed as a 64-bit client application for Windows or Mac. For large-scale deployments, the installed client is recommended, as it scales and performs to a higher capacity than the Web Start version.
Client Sizing and OS
The LiveNX client runs on a standard 64-bit Windows PC (Windows 7 or 8); 32-bit Windows is supported via Web Start only. The LiveNX Mac client runs on OS X 10.9+ with LiveAction client 3.14+. The specifications for each type are below:
LiveNX Server runs on a Linux or Windows Server or VM. The LiveNX Server has a built-in collection Node and is fully usable without any additional installations.
The Node provides additional collection and other capabilities and helps LiveNX scale horizontally by providing additional processing. The Node runs on Linux or Windows and communicates with the central LiveNX Server.
Server Virtual Appliance (OVA)
LiveNX Server primarily deploys on ESXi. The Server has a built-in Node as well as Web UI and is fully operational right out of the box. The Server operating system runs on a Linux platform.
Node Virtual Appliance (OVA)
LiveNX Node also deploys on ESXi. The Node collects data and sends it to the Server Virtual Appliance. The Node operating system runs on a Linux platform.
Installer Specification and Performance Details
Node/Server Installer Specifications:
- Xeon X5650 has 6 physical and 12 virtual cores with hyper-threading
- Xeon used in BOM has 8 physical and 16 virtual cores with hyper-threading
- Average SNMP poll interval of 5 minutes (interface count, technology, and poll rate all affect performance; see the sketch below)
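As a rough illustration of how device count, interface count, and poll rate combine into polling load, consider the hypothetical sketch below. The function and the per-interface OID count are assumptions made for the example, not LiveNX internals.

```python
# Hypothetical capacity estimate: SNMP requests/second generated by polling.
# The OID count and all names here are illustrative assumptions.

def snmp_requests_per_second(devices: int, interfaces_per_device: int,
                             oids_per_interface: int = 10,
                             poll_interval_s: int = 300) -> float:
    """Spread one poll cycle's requests evenly across the poll interval."""
    total_oids = devices * interfaces_per_device * oids_per_interface
    return total_oids / poll_interval_s

# Example: 500 devices x 20 interfaces at the 5-minute average poll above.
print(snmp_requests_per_second(500, 20))  # ~333 requests/second
```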
Node/Server Storage Sizing
The following are the storage sizing specifications for both Nodes and Servers, based on flow type ingestion:
- Testing Platform: LiveNX 5.3.0
- Standard Basic v9 NetFlow Template
- Formula: Monthly Disk Usage = Flow Size * FPS * 30 (days per month; see the sketch below)
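A minimal sketch of the formula above. The document does not state the units of "Flow Size", so this sketch assumes it is the daily on-disk footprint per flow-per-second; treat the numbers as illustrative only.

```python
# Direct transcription of the sizing formula above. "flow_size" is assumed
# to be the daily on-disk footprint per flow-per-second; the document does
# not state its units, so treat this as an interpretation, not a spec.

def monthly_disk_usage(flow_size_per_fps_per_day: float, fps: float,
                       days_per_month: int = 30) -> float:
    """Monthly Disk Usage = Flow Size * FPS * 30."""
    return flow_size_per_fps_per_day * fps * days_per_month

# Example: a hypothetical 10 MB/day footprint per FPS at 1,000 FPS.
print(monthly_disk_usage(10e6, 1_000))  # 3e11 bytes (~300 GB/month)
```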
NetFlow v9 Basic Raw Flows Disk Usage
Disk usage will be the sum of all flow types (Basic, AVC, and Medianet flows).
• Local Drive Preferred
- Minimum performance equivalent to SATA 6 Gb/s
- 7200 RPM minimum; 10K RPM for better performance
- RAID 10 for better performance
- SSD for better performance
• SAN and/or NAS
- Meet performance and latency specification of local drive
- Support sustained writes at high speed
- Support sequential reads at high speed for sequential blocks
Virtual Appliance Specifications
Server OVA Specifications
Network admins can start with the Custom flavor and modify CPU, memory, and HDD as required. CPU and memory specifications need to match the Small, Medium, or Large flavors.
Number of Deployed Instances Guideline
Hardware and Operating System Requirements
LiveNX is a Client/Server application with optional Nodes. The LiveNX Client software runs on Windows or Mac OS X, or can be accessed via supported browsers.
LiveNX Servers and Nodes have the following minimum software requirements:
Server and Node OS and Browser
Single Server deployment of LiveNX consists of installing the Server on a Linux or Windows Server or VM. Since the LiveNX Server has a built-in collection Node, it is fully usable without any additional installations.
In distributed deployments, a single Server is deployed as usual, but additional collection Nodes can be deployed and associated with the Server.
LiveNX currently offers Virtual Appliances that are prebuilt and ready to deploy.
The use and location of additional Nodes are based on three criteria:
o Offload processing from the Server to another Node
o Place the Node near the devices being polled
- For example, place the Node at a branch site so data is not polled across the WAN to the data center where the Server resides
o Place the Node in a different security zone, such as a DMZ
- The Node will initiate communication from the security zone to the Server
- In case of communication loss, either the Server or the Node may initiate communication to re-establish the connection (see the sketch below)
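The connection behavior described in the last two points can be pictured with a minimal sketch. Everything here is illustrative: the host name, port, handshake payload, and retry policy are assumptions for the example, not the actual LiveNX protocol.

```python
# Illustrative sketch only: a Node-side loop that initiates the connection
# to the Server (outbound from the security zone) and re-establishes it on
# loss. Host, port, and retry policy are assumptions, not LiveNX behavior.
import socket
import time

SERVER = ("livenx-server.example.com", 7000)  # hypothetical address/port

def node_connection_loop() -> None:
    while True:
        try:
            with socket.create_connection(SERVER, timeout=10) as conn:
                conn.sendall(b"node-hello")   # placeholder handshake
                while conn.recv(4096):        # stay connected until loss
                    pass
        except OSError:
            pass                              # connection lost or refused
        time.sleep(5)                         # back off, then re-initiate
```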
Appendix A: TCP/UDP Ports
Appendix B: NetFlow Deployment Considerations
NetFlow data is sent by network infrastructure devices (routers, switches, etc.) across the network to LiveNX collector Nodes. NetFlow consumes a small amount of network bandwidth in order to deliver the management data it provides. The purpose of this document is to provide examples of NetFlow bandwidth consumption rates from real networks managed by LiveNX, giving network architects data points for capacity planning.
LiveNX can be used to track both the flow rate per second and the actual bandwidth consumption of NetFlow by using its own NetFlow Reports. The volume of NetFlow data that a device places on the wire is proportional to two main factors:
- Number of interfaces enabled for NetFlow
- Volume of end-user data (voice/video/web/etc.) on the network
LiveNX recommends enabling flow on the fewest interfaces that still provide the fullest view of network traffic. Most Cisco devices support configuring NetFlow bi-directionally on an interface, in both the input and output directions. If flow is configured bi-directionally on two interfaces, for example the LAN and WAN interfaces of a WAN router, then two flow records will be created and sent to LiveNX for each minute that a conversation is active: one record as the conversation enters the LAN interface and a second as it leaves the WAN interface. Flow therefore consumes twice the bandwidth needed to report on that one conversation. To limit the bandwidth utilization of NetFlow, LiveNX recommends enabling flow bi-directionally on only the WAN interface(s) of WAN devices (see the sketch below).
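To make the doubling concrete, here is a hypothetical back-of-the-envelope estimate. The per-record size, conversation count, and export interval are assumed example values, not measured LiveNX figures; "directions" counts the monitored interface/direction points each conversation crosses.

```python
# Hypothetical estimate of NetFlow export bandwidth. The per-record size
# and conversation count are illustrative assumptions, not measured values.

def netflow_bps(active_conversations: int, directions: int,
                record_bytes: int = 100, export_interval_s: int = 60) -> float:
    """One record per conversation per monitored interface/direction per
    minute, as described above, converted to bits per second."""
    records_per_s = active_conversations * directions / export_interval_s
    return records_per_s * record_bytes * 8

# 2,000 active conversations: bi-directional on two interfaces (4 records
# per conversation) exports twice the bandwidth of WAN-interface-only (2).
print(netflow_bps(2_000, 4))  # ~106,667 bps
print(netflow_bps(2_000, 2))  # ~53,333 bps
```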
Some Cisco devices only support flow configured in the input direction. For these devices, the same principle applies: configure flow on the fewest interfaces that still provide the fullest view of the network traffic.
The second main factor in the volume of bandwidth consumed by NetFlow is the volume of end-user data (voice, video, web, etc.) traversing the network; NetFlow consumption is proportional to user traffic, not to link capacity. For example, NetFlow will typically consume less bandwidth on a T1/E1 WAN link than on a 100 Mb WAN link. But if a 100 Mb link carries only a T1/E1's volume of end-user data, its NetFlow consumption will be similar to that of a physical T1/E1.
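A minimal sketch of this proportionality, assuming a purely illustrative 0.5% overhead ratio (not a published figure). Note that link capacity never enters the calculation.

```python
# Illustrative only: NetFlow overhead tracks user-data volume, not link size.
# The 0.5% overhead ratio is an assumed example value, not a measurement.

def netflow_overhead_bps(user_data_bps: float,
                         overhead_ratio: float = 0.005) -> float:
    return user_data_bps * overhead_ratio

T1_BPS = 1_544_000
# A T1's worth of user data yields the same NetFlow overhead whether the
# link itself is a T1 or a 100 Mb circuit:
print(netflow_overhead_bps(T1_BPS))  # ~7,720 bps either way
```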
Example Flow Bandwidths
The following table contains data taken from LiveNX running in production networks. The values represent sample utilizations from actual WAN environments. Each of these examples has flow configured bi-directionally on only the WAN interface.
NOTE: The percentages represent the percent of bandwidth utilized by flow compared to the end-user bandwidth.
Example Node/Server Bandwidth
LiveNX can be deployed in a distributed architecture. In this model, LiveNX Node collectors receive NetFlow and SNMP data from infrastructure devices (routers, switches, etc.) and store it locally. The LiveNX Server requests specific data from the Nodes on demand to render end-user views, dashboards, and reports. There is also minimal synchronization communication between the Server and Node(s). The volume of bandwidth used by the LiveNX Server and Node(s) is proportional to the number of devices monitored by each Node and the number of end users actively using LiveNX. The following table provides bandwidth examples of this communication (a rough model follows the note below):
NOTE: These are typical bandwidth estimates that one would expect to see with LiveNX. Each network is different, so results may vary.
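As a rough way to reason about this scaling, the sketch below models Server-to-Node bandwidth as linear in the device and user counts. The per-device and per-user constants are placeholders chosen for illustration, not LiveAction-published figures.

```python
# Rough illustrative model of Server<->Node bandwidth; the per-device and
# per-user constants are placeholders, not LiveAction-published figures.

def node_server_bps(devices: int, active_users: int,
                    per_device_bps: float = 50.0,
                    per_user_bps: float = 10_000.0) -> float:
    """Bandwidth grows with the devices monitored per Node and with the
    number of end users actively viewing dashboards and reports."""
    return devices * per_device_bps + active_users * per_user_bps

print(node_server_bps(1_000, 5))  # 100,000 bps (0.1 Mbps) under these assumptions
```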