Installing and Configuring PostgreSQL HA
The guidelines below support implementation of the previous diagram. They are intended for PostgreSQL administrators and users implementing a PostgreSQL HA deployment.
For reference, Example Deployment of PostgreSQL HA with Pgpool-II contains an example walkthrough of downloads, command line sequences, and scripts used to implement this solution.
Reference Documents
Prior to installing PostgreSQL, read and understand all installation and configuration documents, including any documentation for required prerequisite software. It is important to understand and apply proper settings, including security recommendations.
The following links provide useful information about installing and configuring PostgreSQL HA using streaming replication and Pgpool-II for node management.
Installing PostgreSQL and Creating a New User Role in PostgreSQL
Instructions to install and configure PostgreSQL are provided in the Installing ThingWorx guide. Refer to the install guide for the specific version of ThingWorx to be deployed. The same install and configure tasks must be performed on all three PostgreSQL nodes.
Configuring and Executing the PostgreSQL Database Script
Instructions to build the database within PostgreSQL are provided in the Installing ThingWorx guide. Refer to the install guide for the specific version of ThingWorx to be deployed. The same install and configure tasks should be performed on all three PostgreSQL nodes.
Configuring and Executing the Model/Data Provider Schema Script
Instructions to build the ThingWorx schema within PostgreSQL are provided in the Installing ThingWorx guide. Refer to the install guide for the specific version of ThingWorx to be deployed. The same configure tasks should be performed on all three PostgreSQL nodes.
Create a replication user on all PostgreSQL nodes
Create a PostgreSQL user to manage replication tasks on all nodes. The same user name and password must be used on every node.
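For illustration, such a role can be created with psql on each node. This is a sketch: the role name replicator matches the sr_check_user value used later in the Pgpool-II configuration, and the password is a placeholder you must replace.

```sql
-- Hypothetical example: run as a PostgreSQL superuser on each node.
-- Role name and password are placeholders; use identical values on all nodes.
CREATE ROLE replicator WITH REPLICATION LOGIN ENCRYPTED PASSWORD 'replicator';
```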
Add replication parameters to all PostgreSQL nodes
The following table contains PostgreSQL parameters that control its replication services.
The values listed in the configuration column reflect the example deployment in the reference architecture, but can be modified for your environment. For many of the settings in the table below, links are provided to help determine appropriate configuration values for your environment.
Setting
Configuration
Description
listen_addresses
'*'
Listen on all network interfaces. In some situations, if there are multiple network interfaces, it is best to restrict this to specific network interfaces.
port
5432
This is the default port number.
max_connections
200
The default in PostgreSQL is 100. The default setting in ThingWorx is 100 (maxpoolsize). Increase this number based on total number of concurrent connections expected in the database. This should always be higher than the number of servers in the cluster times the max pool size configured in the platform-settings.json file for ThingWorx.
shared_buffers
1024MB
Optional performance tuning. Sets the amount of memory the database server uses for shared memory buffers. It is recommended to set this to one fourth of the memory available on the machine. Refer to shared_buffers in Resource Consumption > Memory.
work_mem
32MB
Optional performance tuning. Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Refer to work_mem in Resource Consumption > Memory.
maintenance_work_mem
512MB
Optional performance tuning. Specifies the maximum amount of memory to be used by maintenance operations. Refer to maintenance_work_mem in Resource Consumption > Memory.
wal_level
replica
Determines how much information is written to the write-ahead log. Streaming replication requires this to be set to replica. Refer to Write Ahead Log > Settings.
synchronous_commit
on
Refer to synchronous_commit (enum) in Write Ahead Log > Settings.
archive_mode
on
Enables WAL archiving. Refer to Write Ahead Log > Archiving.
archive_command
'cd .'
A placeholder command that does nothing, but satisfies the requirement that archive_command be set when archive_mode is on. Refer to archive_command (string) in Write Ahead Log > Archiving.
max_wal_size
10
Refer to max_wal_size (integer) in Write Ahead Log > Checkpoints. For details about WAL configuration, refer to WAL Configuration in the PostgreSQL 10 documentation.
synchronous_standby_names
'node1, node2' (on node0), 'node2, node0' (on node1), or 'node0, node1' (on node2)
As a comma-separated list, add the other nodes' application_name values as specified in their recovery.conf files. Refer to synchronous_standby_names in the PostgreSQL documentation.
hot_standby
on
Set this to on so that standby nodes accept read-only connections. For information about this mode, refer to Hot Standby.
fsync
on
Refer to fsync in Write Ahead Log > Settings.
checkpoint settings
Refer to Checkpoints for information on the checkpoint settings.
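Collected as a postgresql.conf fragment, the settings above might look like the following sketch. The values mirror the example deployment; adjust them for your environment, and note that synchronous_standby_names differs on each node. wal_level is shown as replica, the value streaming replication requires.

```
# postgresql.conf sketch (example deployment values; adjust per environment)
listen_addresses = '*'
port = 5432
max_connections = 200
shared_buffers = 1024MB
work_mem = 32MB
maintenance_work_mem = 512MB
wal_level = replica
synchronous_commit = on
archive_mode = on
archive_command = 'cd .'          # no-op placeholder; replace to archive WAL
max_wal_size = 10                 # value from the example deployment
synchronous_standby_names = 'node1, node2'   # on node0; list the other two nodes on each node
hot_standby = on
fsync = on
```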
Establish start_replication and retargetMaster scripts
On each node, establish a script to activate replication services and ensure the PostgreSQL node is in sync with the system.
Also on each node, establish a script that can create and implement a recovery.conf file. The recovery.conf file should contain the changes needed to adjust which node is master and which is standby depending on the failure that occurs.
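On a standby, the recovery.conf produced by such a script might look like the following sketch. The host, credentials, application_name, and trigger-file path are placeholders (not the actual PTC script output); application_name must match an entry in synchronous_standby_names on the master.

```
# recovery.conf sketch for a standby node (all values are placeholders)
standby_mode = 'on'
primary_conninfo = 'host=<master ip> port=5432 user=replicator password=replicator application_name=node1'
recovery_target_timeline = 'latest'
trigger_file = '/var/lib/postgresql/10.x/main/promote_trigger'   # hypothetical path
```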
Adjust outside connection parameters to all PostgreSQL nodes
Modify the parameters in each pg_hba.conf file to allow the user and database that store the ThingWorx data to be accessed from the IP addresses that will connect to the database.
If connecting from Pgpool-II, verify that the authentication method conforms to what the Pgpool-II documentation supports (md5 or trust).
For additional information on the pg_hba.conf file, refer to The pg_hba.conf File in the PostgreSQL 10 documentation.
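For illustration, pg_hba.conf entries permitting the ThingWorx database user and the replication user might look like this sketch. The database name, user names, and CIDR ranges are placeholders; md5 is shown because Pgpool-II supports md5 or trust.

```
# pg_hba.conf sketch (placeholder database, users, and addresses)
# TYPE  DATABASE      USER         ADDRESS          METHOD
host    thingworx     twadmin      10.0.0.0/24      md5
host    replication   replicator   10.0.0.0/24      md5
```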
Restart PostgreSQL services to initiate replication
Restart all PostgreSQL nodes in sequence to control which node is the master (started first), which is the primary standby (started second), and which is the second standby (started last).
For the master node, start the PostgreSQL services.
For the primary standby node, first execute the start_replication script to sync it with the master node. Then start the PostgreSQL services.
For the second standby node, first execute the start_replication script to sync it with the primary standby node. Then start the PostgreSQL services.
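The startup sequence above can be sketched as follows. The service name and script path are assumptions for a Debian-style PostgreSQL 10 install, not the PTC samples.

```shell
# On the master (node0): start PostgreSQL first
sudo systemctl start postgresql

# On the primary standby (node1): sync from the master, then start
sudo -u postgres /usr/local/bin/start_replication.sh   # hypothetical script path
sudo systemctl start postgresql

# On the second standby (node2): sync from the primary standby, then start
sudo -u postgres /usr/local/bin/start_replication.sh
sudo systemctl start postgresql
```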
Install Pgpool-II nodes
Prior to installing Pgpool-II, read and understand all installation documents, including documentation for any necessary prerequisite software. It is important to understand and apply proper settings, including security recommendations.
Download information for Pgpool-II is available on the Pgpool-II website. Consult your operating system documentation for instructions on installing Pgpool-II. The same version of Pgpool-II should be installed on all nodes that run the Pgpool-II services.
Configure Pgpool-II nodes
The following table contains the parameters to modify for a Pgpool-II installation. All Pgpool-II parameters are maintained in the pgpool.conf file. These properties and values should be added to all Pgpool-II nodes.
Setting
Value
Description
listen_addresses
'*'
Choose values that allow the ThingWorx application's Model Provider to connect to Pgpool-II, whether it is on the same or different server.
port
5432
TCP port to listen for client connections
pcp_listen_addresses
'*'
IP/host filter for Port Control Protocol (PCP) connections (* allows all)
pcp_port
9898
TCP port to listen for PCP connections
backend_hostname
<ip of backend#>
backend_port
<port of backend#>
backend_weight
1
backend_data_directory
/var/lib/postgresql/10.x/main
backend_flag
ALLOW_TO_FAILOVER
Set these backend configuration values for each of your three nodes (master and two standbys), suffixing each setting name with the node number. For example, backend_hostname0 is your master, backend_hostname1 is the first standby, and so on. Refer to the Pgpool-II documentation for further details.
enable_pool_hba
on
Enables pool_hba.conf
master_slave_mode
on
This tells Pgpool-II that you are using master/standby replication.
load_balance_mode
off
PTC does not recommend load balancing for ThingWorx.
master_slave_sub_mode
stream
This tells Pgpool-II to use the out-of-the-box PostgreSQL streaming replication.
replication_mode
off
Do not use Pgpool-II's replication, instead use the out-of-the-box PostgreSQL streaming replication.
sr_check_period
10
The interval, in seconds, between streaming replication delay checks.
sr_check_user
replicator
Streaming replication user
sr_check_password
replicator
Streaming replication password
sr_check_database
postgres
Streaming replication database
health_check_user
postgres
Failover health check user
health_check_password
postgres
Failover health check password
health_check_database
postgres
Failover health check database
failover_command
/etc/pgpool2/failover.sh %d %h %D %m %H %R %M %P
See the failover_command section below for more information on this setting.
Also, see the following sample scripts in Appendix C:
failover.sh
retargetMaster_001.sh
retargetMaster_002.sh
retargetMaster_003.sh
num_init_children
max_pool
child_max_connections
superuser_reserved_connections
Performance tuning parameters related to Pgpool-II's connection pooling features. Provide enough connections at startup for your highest expected traffic throughput, without exceeding the max_connections setting of the PostgreSQL database nodes. Refer to the Pools section of the Pgpool-II manual for suggestions and formulas for calculating values.
Ensure that num_init_children is greater than the maxpoolsize in modelproviderconfig.json, and that max_connections in postgresql.conf is greater than the num_init_children setting.
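Pulled together as a pgpool.conf fragment, the values above might look like the following sketch. The backend IP addresses and the pooling sizes are placeholders to size for your environment.

```
# pgpool.conf sketch (example values; backend IPs and pool sizes are placeholders)
listen_addresses = '*'
port = 5432
pcp_listen_addresses = '*'
pcp_port = 9898

backend_hostname0 = '<ip of node0>'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/10.x/main'
backend_flag0 = 'ALLOW_TO_FAILOVER'
# repeat backend_*1 and backend_*2 for the two standby nodes

enable_pool_hba = on
master_slave_mode = on
master_slave_sub_mode = 'stream'
replication_mode = off
load_balance_mode = off
sr_check_period = 10
sr_check_user = 'replicator'
sr_check_password = 'replicator'
sr_check_database = 'postgres'
health_check_user = 'postgres'
health_check_password = 'postgres'
health_check_database = 'postgres'
failover_command = '/etc/pgpool2/failover.sh %d %h %D %m %H %R %M %P'

# Sizing: num_init_children must stay below max_connections in postgresql.conf
# and above the ThingWorx maxpoolsize; 100/1 here are placeholder values.
num_init_children = 100
max_pool = 1
```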
Configure the PostgreSQL failover script
Establish a failover script within each Pgpool-II node that the Pgpool-II service will call upon failure detection. This script should contain the logic and tasks to perform for any PostgreSQL node failure.
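As a rough illustration of what such a script does (Appendix C contains the actual samples; this is a simplified sketch, not the PTC logic), failover.sh receives the placeholders passed by failover_command and promotes a standby only when the primary has failed:

```shell
#!/bin/bash
# failover.sh sketch -- simplified illustration; see the Appendix C samples for real logic.
# Arguments arrive in failover_command order: %d %h %D %m %H %R %M %P
FAILED_NODE_ID="$1"      # %d: id of the failed node
FAILED_HOST="$2"         # %h: hostname of the failed node
NEW_MASTER_HOST="$5"     # %H: hostname of the new master candidate
OLD_PRIMARY_ID="$8"      # %P: id of the old primary node

# Act only when the failed node was the primary; if a standby failed,
# the primary keeps running and no promotion is needed.
if [ "$FAILED_NODE_ID" = "$OLD_PRIMARY_ID" ]; then
    # Promote the candidate standby (paths are assumptions for PostgreSQL 10 on Debian).
    ssh postgres@"$NEW_MASTER_HOST" \
        "/usr/lib/postgresql/10/bin/pg_ctl promote -D /var/lib/postgresql/10.x/main"
fi
```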
Update PCP configuration
Update the pcp.conf file with the user name and password to be used for PCP (administrative) connections to Pgpool-II. Each entry takes the form username:md5-hashed-password; the pg_md5 utility shipped with Pgpool-II can generate the hash.
Update Client Authentication
Update the pool_hba.conf file to recognize all ThingWorx servers.
For ThingWorx requests to PostgreSQL, Pgpool-II will connect to the PostgreSQL nodes using the ThingWorx credentials to maintain established access privileges and restrictions. The pool_hba.conf file uses the same format, authentication methods, and authentication options (md5, trust, and so forth) as pg_hba.conf.
For details about configuring pool_hba.conf, see http://www.pgpool.net/docs/latest/en/html/client-authentication.html.
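Because pool_hba.conf shares pg_hba.conf's format, its entries might look like this sketch (the database name, user, and address range are placeholders):

```
# pool_hba.conf sketch (same format as pg_hba.conf; placeholder values)
# TYPE  DATABASE    USER      ADDRESS        METHOD
host    thingworx   twadmin   10.0.1.0/24    md5
```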
Configuring Pgpool-II for Failover
In high availability configurations, Pgpool-II operates in an active/passive arrangement, typically with one active Pgpool-II node and one standby node. Pgpool-II provides a watchdog process to monitor node health and activate the standby node upon failure of the primary node.
Consult the Pgpool-II documentation for more information on configuring watchdog (http://www.pgpool.net/docs/latest/en/html/example-watchdog.html).
The following table contains settings and values to consider when configuring watchdog within Pgpool-II. These settings and values should be added to the pgpool.conf file of each Pgpool-II node.
Setting
Value
Description
use_watchdog
on
Turns on watchdog within Pgpool-II
wd_hostname
'{IP address of this Pgpool-II node}'
Host name or IP address of this watchdog
wd_port
9000
Port number of this watchdog
delegate_IP
'{Virtual IP address to access PostgreSQL}'
Virtual IP address that clients use to access PostgreSQL (through Pgpool-II)
ifconfig_path
/etc/pgpool2
The absolute path of the directory that contains the if_up_cmd and if_down_cmd commands or scripts.
if_up_cmd
'ifup.sh $_IP_$ <eni id of Pgpool node>'
The command issued when Pgpool-II attempts to bring up the virtual IP interface with the delegate_IP address. You can retrieve the ENI ID of the Pgpool-II node by logging into the EC2 administration console and going to your Pgpool-II instance. In its description in the console, locate the Network interfaces entry, click eth0, and locate the interface ID. Use that value for <eni id of Pgpool node>.
if_down_cmd
'ifdown.sh $_IP_$ <eni id of Pgpool node>'
The command issued when Pgpool-II attempts to bring down the virtual IP interface with the delegate_IP address. See if_up_cmd for how to obtain the ENI ID of the Pgpool-II node.
arping_path
/usr/bin
Path where the iputils-arping package is installed.
arping_cmd
arping -U $_IP_$ -w 1
The arping command used to announce the new location of the virtual IP after it moves.
heartbeat_destination0
'{IP address of the other Pgpool-II node}'
The IP address against which the heartbeat check is made; the value of the other_pgpool_hostname0 setting
heartbeat_destination_port0
9694
Use the default value.
heartbeat_device0
'eth0'
The NIC device for the IP address for heartbeat communication
other_pgpool_hostname0
'{IP address of the other Pgpool-II node}'
The IP address of the other Pgpool-II server instance
other_pgpool_port0
5432
The port to which the other Pgpool-II node listens
other_wd_port0
9000
The port on which the other Pgpool-II node's watchdog listens
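As a pgpool.conf fragment for one of the two Pgpool-II nodes, the watchdog settings above might look like the following sketch. All IP addresses and the ENI ID are placeholders for this EC2-based reference deployment.

```
# Watchdog sketch for one Pgpool-II node (placeholder IPs and ENI id)
use_watchdog = on
wd_hostname = '10.0.1.11'                 # this Pgpool-II node
wd_port = 9000
delegate_IP = '10.0.1.100'                # virtual IP used by ThingWorx
ifconfig_path = '/etc/pgpool2'
if_up_cmd = 'ifup.sh $_IP_$ eni-0abc123'
if_down_cmd = 'ifdown.sh $_IP_$ eni-0abc123'
arping_path = '/usr/bin'
arping_cmd = 'arping -U $_IP_$ -w 1'
heartbeat_destination0 = '10.0.1.12'      # the other Pgpool-II node
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth0'
other_pgpool_hostname0 = '10.0.1.12'
other_pgpool_port0 = 5432
other_wd_port0 = 9000
```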