Servigistics InService Deployment > Deploying Viewers in a Cluster > Installing and Configuring Viewers in a Cluster
  
Installing and Configuring Viewers in a Cluster
The following sections provide an example of how you can deploy Servigistics InService in a cluster environment.
* 
Differences in syntax and file names for Linux/UNIX and Windows operating systems are called out throughout the document. Generally speaking, if you are using a Windows operating system, use the appropriate variable syntax.
For example:
Change ${ENIGMA_CONFIG_HOME} to %ENIGMA_CONFIG_HOME%. In addition, statements in .bat files should start with SET when using Windows commands.
Linux/UNIX uses shell script (.sh) files, while Windows uses batch (.bat) files.
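For example, the same variable would be set and referenced as follows in each file type (LB_PORT is used here only because it appears later in this guide):
# setEnv.sh (Linux/UNIX)
LB_PORT=2020
echo ${LB_PORT}

REM setEnv.bat (Windows)
SET LB_PORT=2020
echo %LB_PORT%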
Considerations for Windows Operating System
Servigistics InService does not support <InS_Data> folders (data directories) residing on a drive mounted in Windows. Only UNC paths are supported. For example: \\<hostname>\ptc\InS_Data
Servigistics InService only supports <InS_SW> folders (software directory) on local servers. <InS_SW> on a remote server/location is currently not supported.
Make the following configuration change.
1. Go to the Registry (Run->regedit).
2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters
3. Add a new DWORD value named DisabledComponents with a value of ff.
4. Reboot the machine.
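If you prefer the command line, the same value can be created from an elevated Command Prompt with reg add (this sets a DWORD value of 0xff, which disables the IPv6 components):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" /v DisabledComponents /t REG_DWORD /d 0xff /f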
Before You Begin
Before you begin you must:
1. Install and configure your Oracle database.
2. Define a Viewer cluster name.
Establish a cluster name for the Viewers. This is the name or URL used by users to access the system. For example, mycompany.ptc.com
On all Viewer servers, modify the loopback line in /etc/hosts so that each Viewer responds to the intended cluster name:
127.0.0.1 localhost mycompany.ptc.com
3. Configure the shared space between Publisher and Viewers:
For TAL tasks, use a shared location where the Publisher can write TAL content and the Viewers can access it and then write it to their E3C Storage.
* 
In this example this location is under /ptcdata/bundles. This is an NFS mounted location that all these servers can access.
4. Configure the shared space between Viewers:
The E3C Storage location in each data center should be a shared location accessible by all Viewers in that data center.
* 
In this example the E3C Storage location is under /ptcdata/E3C/. This is an NFS mounted location that all these Viewers can access. In this configuration, the InService Data and Work directories are created on these shared disk locations.
Install the Windchill Directory Server
If you are configuring for a High Availability solution, then the Windchill Directory Server must be installed outside the Servigistics InService load point. If you are not installing in a High Availability solution, then you may skip this section and proceed to the installation of the Publisher.
To install the Windchill Directory Server use the following procedure:
1. Launch the Servigistics InService installer. For more information on launching and using this installer see Using the Servigistics InService Installer.
2. Select Standalone Product or Component.
3. Select Windchill Directory Server.
4. Choose a location NOT in the intended Servigistics InService load point. It can be next to it. For example if Servigistics InService is to be installed at /ptc/InService, then install the Windchill Directory Server at /ptc/WindchillDS_10.2/WindchillDS.
5. When configuring your LDAP settings, ensure that the configuration, administration, and enterprise administration branches complete the ‘o=’ statement. For example, ‘o=ptc’.
6. Complete the installation.
7. Perform the following steps to allow the Publisher (or Viewer) to be installed on the same server as the Windchill Directory Server:
Within the load point, tar or zip the installer folder.
Remove the installer folder (keeping the tar or zip file).
Once the installer folder is removed, the Publisher or Viewer installation can proceed (see the example below).
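For example, on Linux/UNIX the sequence could look like the following, assuming the Windchill Directory Server load point from the earlier example and that the folder to be archived is named installer:
$ cd /ptc/WindchillDS_10.2/WindchillDS
$ tar -czf installer.tar.gz installer
$ rm -rf installer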
Install the Publisher
To install the Publisher see Using the Servigistics InService Installer. Note the following:
After selecting New Product Installation select the Publisher option.
Create a new user on the existing database. Ensure that the Create database schema and load base data option is selected.
If you already have an existing Windchill Directory Server (as is necessary when using a High Availability solution) select Configure to an existing installation option. Otherwise, install and configure a new Windchill Directory Server.
Install Viewers
To install the Viewers see Using the Servigistics InService Installer. Note the following:
On all Viewer servers, modify the loopback line in /etc/hosts so that each Viewer responds to the intended cluster name. For example: 127.0.0.1 insrv.mycompany.com
After selecting New Product Installation select the Viewer option.
Create a new user on the existing database. Ensure that the Create database schema and load base data option is selected.
* 
If you are installing Viewers in a multi-site cluster, then you must specify different Oracle schemas, Data directories, and Work directories for each site in the cluster.
If you already have an existing Windchill Directory Server (as is necessary when using a High Availability solution) select Configure to an existing installation. Otherwise, install and configure a new Windchill Directory Server.
For the web server use the cluster name that was added to the hosts file. For example: insrv.mycompany.com
The Data and Work directories should list the intended location of the E3C Storage location, where all Viewers in the intended cluster can access this shared location.
Publisher Post Installation Steps
The following must be performed on the Publisher after it has been installed:
1. Stop any coreServer, coreCMIserver, and jboss services that may be running.
2. Back up E3C.properties by entering the following:
$ cd <InService>/SW
$ cp -p E3C.properties E3C.properties.bak
3. Edit E3C.properties to contain the following. Some are new properties, others are existing properties to be modified.
* 
Do not remove any existing parameters that were established during installation.
For Linux/UNIX:
proxy.url=<Publisher hostname>:8080
package.destination.folder=/ptcdata/bundles/StorePackets
pub.host.name=<Publisher hostname>
pub.port=8080
Site1.host.name=<Viewer in Site 1 to manage TAL activity>
remote.port=8080

For Windows:
proxy.url=<publisher hostname>:8080
package.destination.folder=\\\\ptcdata\\bundles\\StorePackets
pub.host.name=<publisher hostname>
pub.port=8080
Site1.host.name=<Viewer in Site 1 to manage TAL activity>
remote.port=8080

* 
Ensure that you are adding a site host name (Site1.host.name) for each Viewer in the cluster.
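As an illustration only, using the hypothetical hostnames pub.mycompany.com and viewer1.mycompany.com (and viewer3.mycompany.com for a second site), the Linux/UNIX entries might look like the following. The Site2.host.name entry is shown on the assumption that additional sites follow the same Site<N>.host.name pattern:
proxy.url=pub.mycompany.com:8080
package.destination.folder=/ptcdata/bundles/StorePackets
pub.host.name=pub.mycompany.com
pub.port=8080
Site1.host.name=viewer1.mycompany.com
Site2.host.name=viewer3.mycompany.com
remote.port=8080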
4. Edit E3C.properties to modify references to ‘localhost’ to instead refer to the actual server hostname. Adjust these properties from:
core.server.host=localhost
bl.host.name=localhost
pr.host.name=localhost
to
core.server.host=<Publisher host name>
bl.host.name=<Publisher host name>
pr.host.name=<Publisher host name>
5. Backup sitesDescriptor.xml by entering the following:
$ cd <InService>/Config/System/Config
$ cp -p sitesDescriptor.xml sitesDescriptor.xml.bak
6. Edit sitesDescriptor.xml as follows:
Adjust the Datatransfer node of the Preview Site as follows, depending on your operating system (Windows or Linux):
For Linux/UNIX:
<Datatransfer OS="linux" path="${package.destination.folder}"/>
For Windows:
<Datatransfer OS="WindowXp" path="${package.destination.folder}"/>
Add a new "Runtime" Site node within <Sites></Sites>:
For Linux/UNIX:
<Site id="Runtime-Site1">
<Distribution type="FS">
<Datatransfer OS="linux" path="${package.destination.folder}"/>
</Distribution>
<Communication url="${Site1.host.name}:${remote.port}"/>
</Site>
For Windows:
<Site id="Runtime-Site1">
<Distribution type="FS">
<Datatransfer OS="WindowsXP" path="${package.destination.folder}"/>
</Distribution>
<Communication url="${Site1.host.name}:${remote.port}"/>
</Site>
* 
Repeat this step for each site in the cluster.
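For example, a second site could be described with a node like the following sketch (Linux/UNIX shown). Site2.host.name is an assumed property name, defined in E3C.properties in the same way as Site1.host.name:
<Site id="Runtime-Site2">
<Distribution type="FS">
<Datatransfer OS="linux" path="${package.destination.folder}"/>
</Distribution>
<Communication url="${Site2.host.name}:${remote.port}"/>
</Site>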
7. Add a new Group node within <SiteGroups></SiteGroups>
For Linux/UNIX:
<Group id="Runtime">
<Distribution type="FS">
<Datatransfer OS="linux" path="${package.destination.folder}"/>
</Distribution>
<Site siteType="REMOTE" id="Runtime-Site1" master="YES" />
</Group>
For Windows:
<Group id="Runtime">
<Distribution type="FS">
<Datatransfer OS="windowsXP" path="${package.destination.folder}"/>
</Distribution>
<Site siteType="REMOTE" id="Runtime-Site1" master="YES" />
</Group>
* 
If there are multiple sites within a SiteGroup, then only one should contain the master="YES" attribute.
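For example, a Group containing two sites might look like the following sketch (Linux/UNIX shown); only Runtime-Site1 carries the master attribute, and omitting the attribute on the second site is an assumption consistent with the note above:
<Group id="Runtime">
<Distribution type="FS">
<Datatransfer OS="linux" path="${package.destination.folder}"/>
</Distribution>
<Site siteType="REMOTE" id="Runtime-Site1" master="YES" />
<Site siteType="REMOTE" id="Runtime-Site2" />
</Group>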
8. Repeat the above steps for each site in the cluster.
9. Backup SiteInfo.conf.xml
$ cd <InService>/Config/System/Config
$ cp -p SiteInfo.conf.xml SiteInfo.conf.xml.bak
10. Modify SiteInfo.conf.xml by adjusting the distribution node from (depending on your operating system, Linux/UNIX or Windows):
<Distribution type="FTP">
<Datatransfer hostname="${ftp.host.name}" port="${ftp.port}"
username="${ftp.user.name}" password="${ftp.password}"
transferMode="binary" path="${ftp.relative.path}"/>
</Distribution>
to the following for Linux/UNIX:
<Distribution type="FS">
<Datatransfer OS="linux" path="${package.destination.folder}"/>
</Distribution>
or for Windows:
<Distribution type="FS">
<Datatransfer OS="WindowXP" path="${package.destination.folder}"/>
</Distribution>
Optionally, you may create a startup script to start all Publisher processes. To do this, create the startup script in the <InService>/SW folder:
For Linux/UNIX
#!/bin/sh -e

# this version of startup commands will start services in the background
#./coreServer.sh startservice
#./coreCMIServer.sh startservice
#./jboss.sh &

# this version of startup commands will start services in separate xterm windows.
# Easier for testing and validation
xterm -title "Core Server" -sb -sl 9999 -e "cd /ptc/InService/SW && ./coreServer.sh" &
xterm -title "Core CMI Server" -sb -sl 9999 -e "cd /ptc/InService/SW && ./coreCMIServer.sh" &
xterm -title "InService" -sb -sl 9999 -e "cd /ptc/InService/SW && ./jboss.sh" &
For Windows:
INFO NEEDED
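Until official Windows content is available, the following is a minimal sketch only. It assumes that coreServer.bat and coreCMIServer.bat exist in the SW folder (coreServer.bat is referenced later in this document), that a jboss.bat equivalent of jboss.sh is present, and that the load point is C:\ptc\InService; adjust names and paths to your installation:
@echo off
REM Sketch only: start Publisher services in separate command windows
cd /d C:\ptc\InService\SW
start "Core Server" cmd /k coreServer.bat
start "Core CMI Server" cmd /k coreCMIServer.bat
start "InService" cmd /k jboss.bat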
Viewer Post Installation Steps
The following must be performed on the Viewer after it has been installed:
1. Stop any coreServer, coreCMIserver, and jboss services that may be running.
2. Backup E3C.properties by entering the following:
$ cd <InService>/SW
$ cp -p E3C.properties E3C.properties.bak
3. Edit E3C.properties to contain the following. Some are new properties, others are existing properties to be modified. Additionally, if there are multiple sites in the cluster then some properties will have different values depending on their site.
* 
Do not remove any existing parameters that were established during installation.
For Linux/UNIX:
proxy.url=<Publisher hostname>:8080
coreCMI.server.name=<primary Viewer hostname>
package.destination.folder=/ptcdata/bundles/StorePackets
pub.host.name=<Publisher hostname>
pub.port=8080
remote.host.name=<Publisher host name>
remote.port=<Publisher port>
core.server.host1=<Viewer 1 hostname>
core.server.host2=<additional Viewer host name>
core.server.port1=<initial coreServer port>
core.server.port2=<additional coreServer port>
bl1.host.name=<initial Viewer host name>
bl2.host.name=<additional Viewer host name>
pr1.host.name=<initial Viewer host name>
pr2.host.name=<additional Viewer host name>
lb.master.host=<initial Viewer host name>
lb.backup.host=<additional Viewer host name>
lb.port=2020
For Windows:
proxy.url=<publisher hostname>:8080
coreCMI.server.name=<primary viewer hostname>
package.destination.folder=\\\\ptcdata\\bundles\\StorePackets
pub.host.name=<publisher hostname>
pub.port=8080
remote.host.name=<publisher host name>
remote.port=<publisher port>
core.server.host1=<viewer 1 hostname>
core.server.host2=<additional viewer host name>
core.server.port1=<initial coreServer port>
core.server.port2=<additional coreServer port>
bl1.host.name=<initial viewer host name>
bl2.host.name=<additional viewer host name>
pr1.host.name=<initial viewer host name>
pr2.host.name=<additional viewer host name>
lb.master.host=<initial viewer host name>
lb.backup.host=<additional viewer host name>
lb.port=2020
4. Edit E3C.properties to modify references to ‘localhost’ to instead refer to the actual server hostname. Adjust these properties from:
core.server.host=localhost
bl.host.name=localhost
pr.host.name=localhost
to
core.server.host=<Viewer host name>
bl.host.name=<Viewer host name>
pr.host.name=<Viewer host name>
5. Back up setEnv.sh / setEnv.bat:
For Linux/UNIX:
$ cd <InService>/SW
$ cp -p setEnv.sh setEnv.sh.bak
For Windows:
$ cd <InService>/SW
$ cp -p setEnv.bat setEnv.bat.bak
6. Edit setEnv.sh / setEnv.bat to contain the following. If there are multiple sites in the cluster, then some properties will have different values depending on their site:
* 
Do not remove any existing parameters that were established during installation.
In Windows, statements in .bat files should start with SET.
Linux:
# ===================================
# core server Load Balancer variables
# ===================================
LB_MASTER_HOST=<initial Viewer host name>
LB_BACKUP_HOST=<additional Viewer host name>
LB_PORT=2020
Windows:
REM ===================================
REM core server Load Balancer variables
REM ===================================
SET LB_MASTER_HOST=<initial Viewer host name>
SET LB_BACKUP_HOST=<additional Viewer host name>
SET LB_PORT=2020
7. Backup SiteInfo.conf.xml:
$ cd <InService>/Config/System/Config
$ cp -p SiteInfo.conf.xml SiteInfo.conf.xml.bak
8. Modify SiteInfo.conf.xml
a. If you are using two Viewers that use the same E3C Storage, then you need to add additional URLs to the PresentationLayers and BusinessLayers.
* 
The following new parameters are used, which should be established in E3C.properties:
pr1.host.name
pr2.host.name
bl1.host.name
bl2.host.name
<PresentationLayers>
<PL Name="PLLayer1" Url="http://${pr1.host.name}:${web.server.port}/Deployment/CommandServlet.srv?context=Deployment.Application.Presentation"/>
<PL Name="PLLayer2" Url="http://${pr2.host.name}:${web.server.port}/Deployment/CommandServlet.srv?context=Deployment.Application.Presentation"/>
</PresentationLayers>
<BusinessLayers>
<BL Name="BLLayer1" Url="http://${bl1.host.name}:${web.server.port}/DeploymentApp/CommandServlet.srv?context=Deployment.Application.BusinessLogic"/>
<Communication Host="${bl1.host.name}" Url="${bl1.host.name}:${app.server.port}" />
<BL Name="BLLayer2" Url="http://${bl2.host.name}:${web.server.port}/DeploymentApp/CommandServlet.srv?context=Deployment.Application.BusinessLogic"/>
<Communication Host="${bl2.host.name}" Url="${bl2.host.name}:${app.server.port}" />
</BusinessLayers>
9. Uncomment and edit the LoadBalancer node as follows:
<LoadBalancer ServerCrb="ServerCrb" LoadBalancerCrb="LoadBalancerCrb">
<BL Name="BLLayer2"/>
<PL Name="PLLayer1"/>
<BL Name="BLLayer1"/>
<PL Name="PLLayer2"/>
</LoadBalancer>
10. Locate the Server node that references AdminRef1 and modify as follows:
<Server Type="Read" Master="true" ServerCrb="ServerCrb" AdminRef =
"AdminRef1" >
<PL Name="PLLayer1"/>
<BL Name="BLLayer1"/>
</Server>
11. Add additional server nodes for each core server in the cluster. Each server node should reference appropriate combinations of AdminRef nodes, Presentation, and Business components.
<Server Type="Read" Master="true" ServerCrb="ServerCrb"
AdminRef ="AdminRef2" >
<PL Name="PLLayer1"/>
<BL Name="BLLayer1"/>
</Server>
<Server Type="Read" Master="true" ServerCrb="ServerCrb"
AdminRef ="AdminRef3" >
<PL Name="PLLayer2"/>
<BL Name="BLLayer2"/>
</Server>
<Server Type="Read" Master="true" ServerCrb="ServerCrb"
AdminRef ="AdminRef4" >
<PL Name="PLLayer2"/>
<BL Name="BLLayer2"/>
</Server>
12. Adjust the Distribution node from (depending on your operating system, Linux, UNIX or Windows):
<Distribution type="FTP">
<Datatransfer hostname="${ftp.host.name}" port="${ftp.port}"
username="${ftp.user.name}" password="${ftp.password}"
transferMode="binary" path="${ftp.relative.path}"/>
</Distribution>
to the following for Linux/UNIX:
<Distribution type="FS">
<Datatransfer OS="linux" path="${package.destination.folder}"/>
</Distribution>
or for Windows
<Distribution type="FS">
<Datatransfer OS="WindowXP" path="${package.destination.folder}"/>
</Distribution>
13. Backup customContext_3.conf.xml
$ cd <InService>/Config/System/Config
$ cp -p customContext_3.conf.xml customContext_3.conf.xml.bak
14. Modify customContext_3.conf.xml
* 
There are three customContext_X.conf.xml files; _1, _2, and _3. They are processed in that order during startup. A best practice is to modify _3 only. If the Component already exists in _1, copy that component to _3 and modify it as needed (as is done below with the ServerCrb_Upd1 and AdminRef_Upd1 components).
15. Modify customContext_3.conf.xml by adding the following Component nodes within <Components></Components>.
<Component Name="ServerCrb">
<Creation Type="RegisteredCrb">
<Helper>com.enigma.Titan.crb.ServerCrbHelper</Helper>
<CorbaServiceLocator>Core</CorbaServiceLocator>
<IOR>corbaloc:iiop:${lb.master.host}:${lb.port},:${lb.backup.host}:
${lb.port}/ObjectNameServer</IOR>
</Creation>
</Component>

<Component Name="ServerCrb_Upd1" Singleton="true">
<Creation Type="RegisteredCrb">
<Helper>com.enigma.Titan.crb.ServerCrbHelper</Helper>
<IOR>corbaloc:iiop:${coreCMI.server.name}:${coreCMI.server.port}/
ObjectNameServer</IOR>
</Creation>
</Component>

<Component Singleton="true" Name="AdminRef_Upd1">
<Creation Type="RegisteredCrb">
<Helper>com.enigma.Titan.crb.ServerAdminCrbHelper</Helper>
<IOR>corbaloc:iiop:${coreCMI.server.name}:${coreCMI.server.port}/
ObjectNameServerAdmin</IOR>
</Creation>
</Component>

<Component Singleton="true" Name="AdminRef1">
<Creation Type="RegisteredCrb">
<Helper>com.enigma.Titan.crb.ServerAdminCrbHelper</Helper>
<IOR>corbaloc:iiop:${core.server.host1}:${core.server.port1}/
ObjectNameServerAdmin</IOR>
</Creation>
</Component>

<Component Singleton="true" Name="AdminRef2">
<Creation Type="RegisteredCrb">
<Helper>com.enigma.Titan.crb.ServerAdminCrbHelper</Helper>
<IOR>corbaloc:iiop:${core.server.host1}:${core.server.port2}/
ObjectNameServerAdmin</IOR>
</Creation>
</Component>

<Component Singleton="true" Name="AdminRef3">
<Creation Type="RegisteredCrb">
<Helper>com.enigma.Titan.crb.ServerAdminCrbHelper</Helper>
<IOR>corbaloc:iiop:${core.server.host2}:${core.server.port1}/
ObjectNameServerAdmin</IOR>
</Creation>
</Component>

<Component Singleton="true" Name="AdminRef4">
<Creation Type="RegisteredCrb">
<Helper>com.enigma.Titan.crb.ServerAdminCrbHelper</Helper>
<IOR>corbaloc:iiop:${core.server.host2}:${core.server.port2}/
ObjectNameServerAdmin</IOR>
</Creation>
</Component>

<Component Name="LoadBalancerCrb" Singleton="true" >
<Creation Type="RegisteredCrb" >
<Helper>com.enigma.Titan.crb.LBControlCrbHelper</Helper>
<IOR>corbaloc:iiop:${lb.master.host}:${lb.port},:
${lb.backup.host}:${lb.port}/ObjectNameLBControl</IOR>
</Creation>
</Component>
Configure Multiple Core Servers
To configure multiple core servers for cluster configurations copy the default coreServer configuration XML and logger properties files and then modify as needed:
$ cd <InService>/Config/System/Config/Core
$ cp serverCfg.xml serverCfg01.xml
$ cp coreLoggerCfg.properties coreLoggerCfg01.properties
Edit serverCfg01.xml
Edit serverCfg01.xml as follows:
1. Modifying the logger node as follows:
<Logger Path="coreLoggerCfg01.properties"/>
2. Update host and port number in this line:
<Param Name="-ORBendPointNoPublish" Value="${CORE_ORBendPointNoPublish:
-giop:tcp:127.0.0.1:2020}"/>
Replace 127.0.0.1 with the actual hostname, and change the port number to 2031. For example:
<Param Name="-ORBendPointNoPublish" Value="${CORE_ORBendPointNoPublish:
-giop:tcp:<Hostname>:2031}"/>
3. Update the following line:
<Param Name="-ORBendPointPublish" Value="${CORE_ORBendPointPublish:
-giop:tcp:127.0.0.1:2020}"/>
Change it to:
<Param Name="-ORBendPointPublish" Value="${CORE_ORBendPointPublish:
-giop:tcp:<master load balancer host name>:
2020,giop:tcp:<backup load balancer host name>:2020}"/>
 
4. Update the following line:
<Param Name="-TServerPIDFile" Path="coreServer.pid" />
Change it to:
<Param Name="-TServerPIDFile" Path="coreServer01.pid" /> 
5. Near the bottom of the file, enable the KeepEnigmaLTCache entry by entering the following:
<KeepEnigmaLTCache/>
6. Save your modifications.
7. Edit coreLoggerCfg01.properties by:
Modifying the following lines to adjust log file creation:
log4j.appender.FILE_APPENDER.fileName=${core_log}/coreServer01.log
Optionally, setting the output level to DEBUG by entering the following:
log4j.rootLogger=DEBUG, CONSOLE_APPENDER, FILE_APPENDER
8. Copy the new coreServer config XML (serverCfg01.xml) and Logger properties file so that there is one for each intended coreServer to be run simultaneously. For example:
$ cp serverCfg01.xml serverCfg02.xml
$ cp coreLoggerCfg01.properties coreLoggerCfg02.properties
9. Modify serverCfg02.xml (and any additional coreServer config files) to have unique logger properties, port numbers and pid file:
<Logger Path="coreLoggerCfg02.properties"/>
<Param Name="-ORBendPointNoPublish" Value="${CORE_ORBendPointNoPublish:
-giop:tcp:<Hostname>:2032}"/>
Also, update the following statement:
<Param Name="-TServerPIDFile" Path="coreServer01.pid" /> 
To:
<Param Name="-TServerPIDFile" Path="coreServer02.pid" />  
* 
<Hostname> should be the Viewer on which you are updating ServerCfg.xml.
10. Modify coreLoggerCfg02.properties (and any additional Logger properties files) to have a unique log file name.
log4j.appender.FILE_APPENDER.fileName=${core_log}/coreServer02.log
11. Copy the coreServer startup script and modify as needed:
For Linux/UNIX:
$ cd <InService>/SW
$ cp coreServer.sh coreServer01.sh
For Windows:
$ cd <InService>/SW
$ cp coreServer.bat coreServer01.bat
12. Edit coreServer01.sh / coreServer01.bat to reference the serverCfg01.xml created earlier. The content should now look like the following for your operating system:
For Linux/UNIX:
ENIGMA_WORK_HOME=$ENIGMA_WORK_HOME/System/Work/Core/${HOSTNAME}/coreServer-1

if [ "$1" = "startservice" ] ; then
nohup ./coreServer.exe ${BIND_CMD} ${ENIGMA_CONFIG_HOME}/System/Config/Core/serverCfg01.xml &
elif [ "$1" = "" ]; then
./coreServer.exe ${BIND_CMD} ${ENIGMA_CONFIG_HOME}/System/Config/Core/serverCfg01.xml
fi
* 
The use of ${HOSTNAME} provides a unique folder for the coreServer within the Work directory that is shared across the cluster.
For Windows:
SET ENIGMA_WORK_HOME=%ENIGMA_WORK_HOME%/System/Work/Core/%HOSTNAME%/coreServer-1

if defined CORE_BINDING_MODE coreServer.exe -TBindType %CORE_BINDING_MODE% %ENIGMA_CONFIG_HOME%\System\Config\Core\serverCfg01.xml

if not defined CORE_BINDING_MODE coreServer.exe %ENIGMA_CONFIG_HOME%\System\Config\Core\serverCfg01.xml
* 
The use of %HOSTNAME% (${HOSTNAME} on Linux/UNIX) provides a unique folder for the coreServer within the Work directory that is shared across the cluster. If you are using a Windows operating system, change ${ENIGMA_CONFIG_HOME} to %ENIGMA_CONFIG_HOME%.
13. Copy the new startup script (coreServer01.sh / coreServer01.bat) so that there is one for each intended coreServer to be run simultaneously:
For Linux/UNIX:
$ cp coreServer01.sh coreServer02.sh
For Windows:
$ cp coreServer01.bat coreServer02.bat
14. Edit coreServer02.sh / coreServer02.bat to reference the serverCfg02.xml created earlier. The content should now look like the following:
For Linux/UNIX:
ENIGMA_WORK_HOME=$ENIGMA_WORK_HOME/System/Work/Core/${HOSTNAME}/coreServer-2

if [ "$1" = "startservice" ] ; then
nohup ./coreServer.exe ${BIND_CMD} ${ENIGMA_CONFIG_HOME}/System/Config/
Core/serverCfg02.xml &
elif [ "$1" = "" ]; then
./coreServer.exe ${BIND_CMD} ${ENIGMA_CONFIG_HOME}/System/Config/Core/
serverCfg02.xml
fi
For Windows:
SET ENIGMA_WORK_HOME=%ENIGMA_WORK_HOME%/System/Work/Core/%HOSTNAME%/coreServer-2

if defined CORE_BINDING_MODE coreServer.exe -TBindType %CORE_BINDING_MODE% %ENIGMA_CONFIG_HOME%\System\Config\Core\serverCfg02.xml

if not defined CORE_BINDING_MODE coreServer.exe %ENIGMA_CONFIG_HOME%\System\Config\Core\serverCfg02.xml
15. Configure core server load balancer:
Backup and modify the load balancer startup script to work with ALL coreServer instances in the cluster:
For Linux/UNIX:
$ cd <InService>/SW
$ cp loadBalancer.sh loadBalancer.sh.bak
For Windows:
$ cd <InService>/SW
$ cp loadBalancer.bat loadBalancer.bat.bak
Modify the LBserverEndPoint parameters so that each coreServer on each planned Viewer is listed. In this case, there are two Viewers, each with two coreServers. The script was modified from:
Linux:
LB_OPTS="${LB_OPTS} -LBserverEndPoint corbaloc:iiop:localhost:2010/ObjectNameServerCtrl "
#LB_OPTS="${LB_OPTS} -LBserverEndPoint corbaloc:iiop:host2:2010/ObjectNameServerCtrl "
#LB_OPTS="${LB_OPTS} -LBserverEndPoint corbaloc:iiop:host3:2010/ObjectNameServerCtrl "
to
LB_OPTS="${LB_OPTS} -LBserverEndPoint corbaloc:iiop:Viewer1.mycompany.com:2032/ObjectNameServerCtrl "
LB_OPTS="${LB_OPTS} -LBserverEndPoint corbaloc:iiop:Viewer2.mycompany.com:2031/ObjectNameServerCtrl "
LB_OPTS="${LB_OPTS} -LBserverEndPoint corbaloc:iiop:Viewer2.mycompany.com:2032/ObjectNameServerCtrl "
Windows:
set LB_OPTS=%LB_OPTS% -LBserverEndPoint corbaloc:iiop:host1:2010
/ObjectNameServerCtrl
set LB_OPTS=%LB_OPTS% -LBserverEndPoint corbaloc:iiop:host2:2010
/ObjectNameServerCtrl
REM set LB_OPTS=%LB_OPTS% -LBserverEndPoint corbaloc:iiop:host3:2010
/ObjectNameServerCtrl
to
Set LB_OPTS=%LB_OPTS% -LBserverEndPoint corbaloc:iiop:ngs2-viewa1.ptc.com:2031
/ObjectNameServerCtrl
Set LB_OPTS=%LB_OPTS% -LBserverEndPoint corbaloc:iiop:ngs2-viewa1.ptc.com:2032
/ObjectNameServerCtrl
Set LB_OPTS=%LB_OPTS% -LBserverEndPoint corbaloc:iiop:ngs2-viewa2.ptc.com:2031
/ObjectNameServerCtrl
Set LB_OPTS=%LB_OPTS% -LBserverEndPoint corbaloc:iiop:ngs2-viewa2.ptc.com:2032
/ObjectNameServerCtrl
* 
The hostnames should be different per site. The Site 2 load balancers should NOT be routing to Site 1 coreServers. The following change to the load balancer configures only a master load balancer in what will be an overall active/passive configuration for the core server load balancers. The backup load balancer is configured after the rsync.
16. In the loadBalancer.sh / loadBalancer.bat script, modify the host name and port number that it monitors. They are now parameterized and managed in setEnv.sh / setEnv.bat. Comment out the original statement:
Linux:
ORB_OPTS="${ORB_OPTS} -ORBendPoint giop:tcp::2000 "
LB_OPTS="${LB_OPTS} -LBserverEndPoint corbaloc:iiop:Viewa1.mycompany.com:2031/ObjectNameServerCtrl "
Then add the following:
ORB_OPTS="${ORB_OPTS} -ORBendPoint giop:tcp:${LB_MASTER_HOST}:${LB_PORT} "

ORB_OPTS="${ORB_OPTS} -ORBendPointNoListen giop:tcp:${LB_BACKUP_HOST}:${LB_PORT} "
Windows:
Set ORB_OPTS=%ORB_OPTS% -ORBendPoint giop:tcp::2000
Then add the following:
set ORB_OPTS=%ORB_OPTS% -ORBendPoint giop:tcp:%LB_MASTER_HOST%:%LB_PORT%
set ORB_OPTS=%ORB_OPTS% -ORBendPointNoListen giop:tcp:%LB_BACKUP_HOST%:%LB_PORT%
17. Backup 3C.properties
$ cd <InService>/SW
$ cp -p 3C.properties 3C.properties.bak
Configure the Windchill Cluster
For the Windchill component of Servigistics InService configure an identical node cluster. For more information refer to the Windchill Advanced Deployment Guide.
On each Viewer server, run the following commands:
$ cd <InService>/InS_SW/SW/Applications/Windchill.ear/bin
$ ./xconfmanager -t codebase/wt.properties -p -s "wt.cache.master.slaveHosts=<Site1_Viewer1_hostname> <Site1_Viewer2_hostname>"
* 
All planned Viewers in all sites should be listed under this variable.
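For example, with two hypothetical Viewers in each of two sites, the command might look like this:
$ ./xconfmanager -t codebase/wt.properties -p -s "wt.cache.master.slaveHosts=viewer1.site1.mycompany.com viewer2.site1.mycompany.com viewer1.site2.mycompany.com viewer2.site2.mycompany.com"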
Windchill Configurations for Additional Sites
If multiple sites are used in your cluster, then the following configurations need to be performed.
1. Ensure that only one Windchill Directory Server is referenced by all sites. To do this, run the following xconfmanager commands on the additional site Viewers to force them to reference the first site Windchill Directory Server:
$ cd <InService>/InS_SW/SW/Applications/Windchill.ear/bin

$ ./xconfmanager -t codebase.war/wt.properties -p -s "wt.federation.ie.ldapServer=ldap://ngs2-viewa1.ptc.com:1389"

$ ./xconfmanager -t codebase.war/wt.properties -p -s "wt.federation.ie.namingService=com.ptc.ngs2repl.namingService"

$ ./xconfmanager -t codebase.war/wt.properties -p -s "wt.federation.ie.VMName=com.ptc.ngs2repl.InService"

$ ./xconfmanager -t codebase.war/WEB-INF/ieStructProperties.txt -p -s "ie.ldap.managerDn=cn=Manager"

$ ./xconfmanager -t codebase.war/WEB-INF/ieStructProperties.txt -p -s "ie.ldap.managerPw=wcadmin"

$ ./xconfmanager -t codebase.war/WEB-INF/ieStructProperties.txt -p -s "ie.ldap.serverHostName=ngs2-viewa1.ptc.com"

$ ./xconfmanager -t codebase.war/WEB-INF/ieStructProperties.txt -p -s "ie.ldap.serverPort=1389"

$ ./xconfmanager -t codebase.war/WEB-INF/ieStructProperties.txt -p -s "ie.ldap.serviceName="

$ ./xconfmanager -t codebase.war/WEB-INF/MapCredentials.txt -p -s "mapcredentials.admin.adapters=com.ptc.Ldap^cn\=Manager^wcadmin;com.ptc.EnterpriseLdap^cn\=Manager^wcadmin"
2. On the WildFly server, back up and then modify standalone-full.xml to reference the Windchill Directory Server in the first site, by using the following commands:
$ cd <InService>/InS_SW/SW/System/WildFly/standalone/configuration
$ cp standalone-full.xml standalone-full.xml.bak
* 
Within standalone-full.xml there is a <security-domain> node with the name "InService". This node mostly contains references to the standard Enterprise and Administrative LDAP locations in the Windchill Directory Server. The content of this node must be adjusted to reference the first site Windchill Directory Server branch. A best practice is to view this content in the Site 1 standalone-full.xml and ensure the same content is listed in the Site 2 version.
* 
DO NOT simply copy the standalone-full.xml file from Site 1 to Site 2, as it contains unique database references.
<security-domain name="InService" cache-type="default">
<authentication>
<login-module code="com.ptc.sc.sce.user.SCEDefaultLoginModule" flag="optional"/>
<login-module code="org.jboss.security.auth.spi.LdapExtLoginModule" flag="sufficient">
<module-option name="java.naming.provider.url" value="ldap://<Site 1 WindchillDS hostname>:<ldap port>"/>
<module-option name="bindDN" value="cn=Manager"/>
<module-option name="bindCredential" value="<Site 1 WindchillDS password>"/>
<module-option name="baseCtxDN" value="ou=people,cn=EnterpriseLdap,cn=InService,o=PTC"/>
<module-option name="baseFilter" value="(uid={0})"/>
<module-option name="defaultRole" value="valid-user"/>
<module-option name="rolesCtxDN" value="ou=people,cn=EnterpriseLdap,cn=InService,o=PTC"/>
<module-option name="roleFilter" value="(member={1})"/>
<module-option name="roleAttributeID" value="cn"/>
</login-module>
<login-module code="org.jboss.security.auth.spi.LdapExtLoginModule" flag="sufficient">
<module-option name="java.naming.provider.url" value="ldap://<>:1389"/>
<module-option name="bindDN" value="cn=Manager"/>
<module-option name="bindCredential" value="wcadmin"/>
<module-option name="baseCtxDN" value="ou=people,cn=AdministrativeLdap,cn=InService,o=PTC"/>
<module-option name="baseFilter" value="(uid={0})"/>
<module-option name="defaultRole" value="valid-user"/>
<module-option name="rolesCtxDN" value="ou=people,cn=AdministrativeLdap,cn=InService,o=PTC"/>
<module-option name="roleFilter" value="(member={1})"/>
<module-option name="roleAttributeID" value="cn"/>
</login-module>
</authentication>
</security-domain>
3. Configure the database schema. For this installation process, we have established that the primary Windchill schema was created with the first site. To complete the Windchill cluster, we need to modify the additional sites to use this same Windchill schema as well.
* 
The additional sites should reference ONLY the Windchill schema in the first site. The other schemas (CMI, E3C, Titan, Titan2) should remain unique to the additional sites.
4. Collect the following connection information for the Windchill schema in the first site:
Oracle host, port, and service name
Schema name and password
5. Run the following xconfmanager commands to apply this connection info to the additional sites:
$ cd <InService>/InS_SW/SW/Applications/Windchill.ear/bin

$ ./xconfmanager -t db/db.properties -p -s "wt.pom.dbUser=<Site 1 Windchill Schema name>"

$ ./xconfmanager -t db/db.properties -p -s "wt.pom.dbPassword=<Site 1 Windchill Schema password>"

$ ./xconfmanager -t db/db.properties -p -s "wt.pom.jdbc.host=<Site 1 Windchill Schema host>"

$ ./xconfmanager -t db/db.properties -p -s "wt.pom.jdbc.port=<Site 1 Windchill Schema port>"

$ ./xconfmanager -t db/db.properties -p -s "wt.pom.jdbc.service=<Site 1 Windchill Schema service name>"
Create a Startup Script to Start All Viewer Processes
This step is optional. To start all Viewer processes automatically, create a startup script in the <InService>/SW folder.
* 
This optional startup script will start everything needed for a monolithic Viewer: WildFly, core servers, the core server load balancer, and the core CMI server. When using this script for a cluster, the core CMI server should only be started on one server within the site. The E3C.properties file on all Viewers in the site will reference the host and port of this one core CMI server.
#!/bin/sh -e

# this version of startup commands will start services in the background
#./coreServer01.sh startservice
#./coreServer02.sh startservice
#./loadbalancer.sh &
#./coreCMIServer.sh startservice # should only be started on one Viewer
#./jboss.sh &

# this version of startup commands will start services in separate xterm windows.
# Easier for testing and validation
xterm -title "Core Server 01" -sb -sl 9999 -e "cd /ptc/InService/SW && ./coreServer01.sh" &
xterm -title "Core Server 02" -sb -sl 9999 -e "cd /ptc/InService/SW && ./coreServer02.sh" &
xterm -title "Load Balancer" -sb -sl 9999 -e "cd /ptc/InService/SW && ./loadbalancer.sh" &
xterm -title "Core CMI Server" -sb -sl 9999 -e "cd /ptc/InService/SW && ./coreCMIServer.sh" & # should only be started on one Viewer
xterm -title "InService" -sb -sl 9999 -e "cd /ptc/InService/SW && ./jboss.sh" &
Rsync Viewer load point to other Viewer servers
Use rsync to copy this Viewer load point to matching load points on the other Viewer servers.
* 
Rsync is not supported on Windows. You must manually copy the <InService> folder to the other Viewer server. You may run into errors if filenames are too long when manually copying files; you may want to use a third-party copy utility.
1. Enter the following command on the target server:
$ rsync -av -e ssh InService_account_name@Viewer1.mycompany.com:/ptc/InService/ /ptc/InService
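If you want to verify what will be transferred before copying, rsync supports a dry run with the -n option; the account and hostname below are the same hypothetical values used in the command above:
$ rsync -avn -e ssh InService_account_name@Viewer1.mycompany.com:/ptc/InService/ /ptc/InService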
2. Modify <InService>/InS_SW/SW/startServers.sh (or startServers.bat) so that a coreCMI process is NOT started on the newly created Viewer.
3. Modify <InService>/InS_SW/SW/E3C.properties, replacing references to the rsync source hostname with the actual hostname of the rsync target. Most likely, the following properties should be adjusted to the rsync target hostname:
core.server.host=<rsync target hostname>
bl.host.name=<rsync target hostname>
pr.host.name=<rsync target hostname>
computer.name=<rsync target hostname>
4. On the server designated to be the backup core server load balancer, modify the load balancer startup script.
For Linux: <InService>/InS_SW/SW/loadBalancer.sh
ORB_OPTS=
ORB_OPTS="${ORB_OPTS} -ORBendPoint giop:tcp:${LB_MASTER_HOST}:${LB_PORT} "
ORB_OPTS="${ORB_OPTS} -ORBendPointNoListen giop:tcp:${LB_BACKUP_HOST}:${LB_PORT} "
to
ORB_OPTS=
ORB_OPTS="${ORB_OPTS} -ORBendPoint giop:tcp:${LB_BACKUP_HOST}:${LB_PORT} "
ORB_OPTS="${ORB_OPTS} -ORBendPointNoListen giop:tcp:${LB_MASTER_HOST}:${LB_PORT} "
For Windows: <InService>/SW/loadBalancer.bat
set ORB_OPTS=
set ORB_OPTS=%ORB_OPTS% -ORBendPoint giop:tcp:%LB_MASTER_HOST%:%LB_PORT%
set ORB_OPTS=%ORB_OPTS% -ORBendPointNoListen giop:tcp:%LB_BACKUP_HOST%:%LB_PORT%
to
set ORB_OPTS=
set ORB_OPTS=%ORB_OPTS% -ORBendPoint giop:tcp:%LB_BACKUP_HOST%:%LB_PORT%
set ORB_OPTS=%ORB_OPTS% -ORBendPointNoListen giop:tcp:%LB_MASTER_HOST%:%LB_PORT%
5. Repeat for each site.
Provide a Load Balancer
Provide a load balancer to route user traffic to all Viewers in all sites. See Architecture Overview for more guidelines on the use of load balancers. This example uses the HTTP server provided with a Windchill installation as the load balancer.
* 
The load balancer can be configured to run as a service. See the “Starting the Load Balancer as a Service” section for more information.
1. Perform the standalone installation and configure the server to use port 8080.
2. Modify conf/httpd.conf to include the following as the last line in the file: Include conf/extra/load_balancer.conf
3. Create the conf/extra/load_balancer.conf file. The two <Proxy balancer…> nodes should reference the Viewer host names in your system.
#–––––––––––––––load balancer modules–––––#
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule status_module modules/mod_status.so
LoadModule headers_module modules/mod_headers.so
#–––––––––––––––––––––––––––––––––––––––––#

ExtendedStatus On

<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from *.ptc.com
Allow from all
</Location>


#–––– Load Balancing (start)––––#

ProxyRequests off
ProxyTimeout 600

RequestHeader set X-Forwarded-Proto "https"

ProxyPass /balancer-manager !
ProxyPass /server-status !
ProxyPass /apstatus !

<Location "/balancer–manager">
SetHandler balancer–manager
Order deny,allow
Deny from *.ptc.com
Allow from all
</Location>

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy balancer://stickyCluster>
BalancerMember http://<Viewer #1 hostname>:8080 route=1
BalancerMember http://<Viewer #2 hostname>:8080 route=2
ProxySet stickysession=ROUTEID
</Proxy>

<Proxy balancer://RESTfulCluster>
ProxySet failonstatus=503
BalancerMember http://<Viewer #1 hostname>:8080 retry=120
BalancerMember http://<Viewer #2 hostname>:8080 retry=120
</Proxy>

# For InService (as a RESTful app) the following ProxyPass and
# ProxyPassReverse statements should work

ProxyPass /InService/infoengine balancer://stickyCluster/InService/infoengine
ProxyPassReverse /InService/infoengine balancer://stickyCluster/InService/infoengine

ProxyPass /InService balancer://RESTfulCluster/InService
ProxyPassReverse /InService balancer://RESTfulCluster/InService

ProxyPass /InService-HCUSER balancer://RESTfulCluster/InService-HCUSER
ProxyPassReverse /InService-HCUSER balancer://RESTfulCluster/InService-HCUSER

ProxyPass /InService-HCADMIN balancer://RESTfulCluster/InService-HCADMIN
ProxyPassReverse /InService-HCADMIN balancer://RESTfulCluster/InService-HCADMIN

ProxyPass / balancer://RESTfulCluster
ProxyPassReverse / balancer://RESTfulCluster

#–––– Load Balancing (end)––––#
4. Start the load balancer using the Apache commands (see the example below).
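For example, on Linux/UNIX the Apache-based HTTP server is typically started with apachectl from its bin directory; the installation path shown here is an assumption and should be adjusted to your environment:
$ cd /ptc/Windchill/HTTPServer/bin
$ ./apachectl start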
Starting the Load Balancer as a Service
For systems that have more than one Viewer, the load balancer can be run as a service.
To do this, change the following line in loadBalancer.sh to set INSTALL_AS_SERVICE=1.
* 
To enable this, loadBalancer.sh needs to be run in Administrative mode.
Create cache folders for each coreServer in the system
As described earlier, the Work directory is shared across the Viewers within a site, so cache folders must be distinguished not only by name but also by the hostname of the Viewer providing the coreServer.
* 
These commands must be run on each site in the cluster.
Earlier, you added the following content to the coreServer startup scripts (coreServer01.sh or coreServer01.bat, and coreServer02.sh or coreServer02.bat):
For Linux/UNIX:
(in coreServer01.sh)
ENIGMA_WORK_HOME=$ENIGMA_WORK_HOME/System/Work/Core/${HOSTNAME}/coreServer-1
(in coreServer02.sh)
ENIGMA_WORK_HOME=$ENIGMA_WORK_HOME/System/Work/Core/${HOSTNAME}/coreServer-2
For Windows:
(in coreServer01.bat)
ENIGMA_WORK_HOME=%ENIGMA_WORK_HOME%/System/Work/Core/%HOSTNAME%/coreServer-1
(in coreServer02.bat)
ENIGMA_WORK_HOME=%ENIGMA_WORK_HOME%/System/Work/Core/%HOSTNAME%/coreServer-2
After performing the rsync command, there may now be many Viewer servers, all sharing this Work folder. This is where the ${HOSTNAME} variable above comes into use.
Within the $ENIGMA_WORK_HOME/System/Work/Core location, ensure that the following folder structure is created. Ensure that the same OS user that runs the coreServer can access the location.
$ENIGMA_WORK_HOME/System/Work/Core/
<Viewer #1 hostname>
coreServer-1
coreServer-2
<Viewer #2 hostname>
coreServer-1
coreServer-2
<continue pattern for any additional Viewers>
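On Linux/UNIX, this structure can be created with mkdir -p; the hostnames below are hypothetical, and the final chown is only needed if the folders are created by a different OS user than the one that runs the coreServer:
$ cd $ENIGMA_WORK_HOME/System/Work/Core
$ mkdir -p viewer1.mycompany.com/coreServer-1 viewer1.mycompany.com/coreServer-2
$ mkdir -p viewer2.mycompany.com/coreServer-1 viewer2.mycompany.com/coreServer-2
$ chown -R <coreServer OS user> viewer1.mycompany.com viewer2.mycompany.com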
Start Servigistics InService
Go through the startup process, starting Oracle and WindchillDS if necessary, then the Publisher, and then one Viewer at a time. The first Viewer to be started will be the Windchill cache master.
Perform TAL Tasks
Perform TAL tasks as outlined in the Servigistics InService Publishing and Loading Guide.
Validate Cluster Functionality
1. This information does not provide a web server configuration to act as a load balancer; however, a basic configuration should be sufficient.
2. Without the load balancer, you can modify the client hosts file to direct cluster name traffic to one of the Viewer servers.
3. You can start and stop services to test the behavior. For instance, stop the coreServers on viewer_1 and observe that the system uses the coreServers on viewer_2.