Installing on cluster

A production configuration requires you to set up a clustered environment.

Before you begin

Before you set up the nodes in a cluster, you should have already configured a cache server, as described in Setting up cache server. The cluster requires a cache server to hold data that must be available to all nodes in the cluster. If your cache server isn't configured and running, you won't be able to set up the cluster.
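
To confirm the cache server is reachable from a prospective cluster node, you can probe its port before going further. A minimal sketch, assuming the cache server answers on port 6650 (the port discussed under Cluster node communication below) and a placeholder hostname:

    # Probe the cache server port from an application node.
    # cache01.example.com is a placeholder; substitute your cache server address.
    nc -z -w 5 cache01.example.com 6650 && echo "cache server reachable"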

Note: Your license determines whether clustering is enabled and how many nodes are supported. To check the number of clustered servers your license allows, see the license information in the Admin Console.

Topology

The nodes in a cluster need to be installed on the same subnet, and preferably on the same switch. You cannot install nodes in a cluster across a WAN.
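
A quick way to confirm that machines share a subnet is to compare their address and prefix pairs. A minimal check, assuming standard Linux tooling:

    # Run on each prospective node; the IPv4 address and prefix should place
    # every node in the same network (for example, 10.0.1.0/24).
    ip -4 -brief addr show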

Upgrading

Important: If you're upgrading and copying the home directory (such as /usr/local/jive/applications/<instance_name>/home) from the older installation, preserve the node.id file and the crypto directory from the home directory before starting the server. The value stored in node.id must be unique among the cluster nodes; that is, each node in a cluster must have its own value in its node.id file. Preserving node.id also matters because the file plays a role in storing encrypted information in the cluster: if it is lost, you lose access to the encrypted data.
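
Before starting the upgrade, it's worth copying both items somewhere safe. A minimal sketch, assuming the home directory layout shown above; the instance name and backup location are placeholders:

    # Preserve the node identity and encryption material before upgrading.
    JIVE_HOME=/usr/local/jive/applications/my_instance/home   # substitute your instance name
    mkdir -p /root/jive-upgrade-backup
    cp -p "$JIVE_HOME/node.id" /root/jive-upgrade-backup/
    cp -rp "$JIVE_HOME/crypto" /root/jive-upgrade-backup/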

If you are deploying a new cluster, you can copy the contents of the home directory from the first node (where you set up clustering) to subsequent nodes, except for the node.id file. Do not copy node.id to subsequent nodes; if the file does not exist, the application generates a new one on startup.
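
One way to copy the home directory while holding back node.id is an rsync exclude. A sketch, assuming passwordless SSH and a placeholder hostname for the second node:

    # Copy the first node's home directory to node2, leaving out node.id
    # so node2 generates its own on first startup.
    JIVE_HOME=/usr/local/jive/applications/my_instance/home
    rsync -a --exclude=node.id "$JIVE_HOME/" "node2:$JIVE_HOME/"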

Clear and restart the cache server before the upgraded application server nodes are started and try to talk to the cache.
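
The restart order can be expressed as a short sketch. Note that jive-cache is an assumed service name here (check Setting up cache server for the exact command on your system); the application service names come from the installation steps later in this topic:

    # Restarting the in-memory cache server also clears it. Do this first,
    # on the cache machine ("jive-cache" is an assumed service name):
    service jive-cache restart
    # Only then, on each upgraded application node:
    service jive-application start
    service jive-httpd start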

If you're upgrading a plugin, clear that plugin's cache and shut down the cache server first.

Starting new cluster

Always wait for the first node in the cluster to be up and running with clustering enabled before you start the other cluster nodes. Waiting a minute or more between starting each node ensures the nodes don't compete to become the senior member. As the senior member, the first node you start has a unique role in the cluster. For a clustering overview, see Clustering in Jive.
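
The startup order can be scripted. A rough sketch using the service commands from the installation steps below; the hostnames and the delay are placeholders to adapt:

    # Start the senior node first, then the remaining nodes one at a time.
    for node in node1 node2 node3; do
      ssh "$node" 'service jive-application start && service jive-httpd start'
      sleep 90   # crude stand-in for "wait until the node is fully up"
    done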

Clocks

The clocks on all machines must be synchronized for caching to work correctly. For more information, see Managing in-memory cache servers. Also, if you're running in a virtualized environment such as VMware, you must have VMware Tools installed to counteract clock drift.
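
On most Linux distributions you can verify synchronization with chrony or ntpd. A quick check, assuming one of the two is installed on each node:

    # With chrony: the "System time" offset should be within a few milliseconds.
    chronyc tracking
    # With classic ntpd:
    ntpstat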

Cluster node communication

  • Do not put a firewall between your cache servers and your Jive application servers; if you do, caching does not work. A firewall is unnecessary because the application servers and cache servers do not exchange untrusted traffic, and nothing should slow down communication between them.
  • All ports between the cache and web application servers must be open.
  • Block external access to port 6650 (but keep it open between the cluster nodes) so that access from outside the data center is disallowed. This prevents operations allowed by JMX from being executed on the cache server. A firewall sketch follows this list.
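
A minimal iptables sketch of that rule, assuming the cluster and cache servers live on 10.0.1.0/24 (a placeholder for your actual subnet):

    # Allow port 6650 from hosts on the cluster subnet...
    iptables -A INPUT -p tcp --dport 6650 -s 10.0.1.0/24 -j ACCEPT
    # ...and drop it from everywhere else.
    iptables -A INPUT -p tcp --dport 6650 -j DROP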

Overview of cluster installation

  1. Familiarize yourself with the software, hardware, and network requirements and recommendations described in System requirements.
  2. Provision a database server.
  3. If you're going to use a separate server for binary storage, configure a binary storage provider, as described in Configuring binary storage provider.
  4. If your community will use the document conversion feature, configure Document Conversion, as described in Setting up Document Conversion.
  5. Install a cache server on a separate server, as described in Setting up cache server.
  6. Install and configure the application on the first node in your cluster, as described in Setting up cluster.
  7. Install and configure the application on the subsequent nodes in your cluster.

Installing on cluster

Important: If, as part of your new installation, you're setting up one node as a template, then copying the home directory (such as /usr/local/jive/applications/your_instance_name/home) to other nodes in the cluster, you must remove the node.id file and the crypto directory from the home directory before starting the server. The application will correctly populate them.
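
A minimal sketch of that cleanup on a freshly cloned node, assuming the home directory path shown above:

    # Remove identity and encryption material copied from the template node;
    # the application regenerates both on first startup.
    JIVE_HOME=/usr/local/jive/applications/your_instance_name/home
    rm -f "$JIVE_HOME/node.id"
    rm -rf "$JIVE_HOME/crypto"
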
  1. Use the Jive application package to set up a cache server on a separate machine. For more information, see Setting up cache server. Note the cache server address for use in setting up the application servers.
  2. Before proceeding, make sure the cache server you set up is running. It must be running while you set up the application server nodes.
  3. On each node in the cluster, install the application using the package (RPM on Linux), but don't run the Admin Console's Setup wizard.

    For more information on installing the application, see Installing Jive package and starting up.

  4. Start the primary node and navigate to its instance with a web browser. In the setup screen provided, enter the address of the cache server you installed, then complete the Admin Console Setup wizard.
  5. After you've finished with the Setup wizard, restart the node.
  6. Copy the jive.license file, the jive_startup.xml file, and the search and crypto folders from the home directory on the primary node to the home directory on each of the other nodes in the cluster (see the sketch after these steps).

    The home directory is typically found here: /usr/local/jive/applications/your_instance_name/home.

  7. On each of the secondary nodes, remove the node.id file from the home directory. The application correctly populates it on each node when that node starts for the first time.
  8. Start the application on each of the secondary nodes (service jive-application start followed by service jive-httpd start). Because they connect to the same database as the primary server, the secondary nodes detect that clustering is enabled and pick up the configuration you set on the primary node.
  9. Restart all the servers in the cluster to ensure that the address of each node in the cluster is known to all the other nodes.
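
Steps 6 through 8 for a single secondary node might look like the following sketch, assuming passwordless SSH and node2 as a placeholder hostname:

    JIVE_HOME=/usr/local/jive/applications/your_instance_name/home
    # Step 6: copy the license, startup configuration, and the search and crypto folders.
    scp -p "$JIVE_HOME/jive.license" "$JIVE_HOME/jive_startup.xml" "node2:$JIVE_HOME/"
    scp -rp "$JIVE_HOME/search" "$JIVE_HOME/crypto" "node2:$JIVE_HOME/"
    # Step 7: remove node.id so the node generates its own identity.
    ssh node2 "rm -f $JIVE_HOME/node.id"
    # Step 8: start the application services.
    ssh node2 'service jive-application start && service jive-httpd start'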