8. Install and configure Pgpool
Installation planning:
Pgpool-II is responsible for the following tasks in the PostgreSQL cluster:
Directing client (ThingWorx) write traffic to the PostgreSQL master node.
Directing client read traffic to the PostgreSQL standby nodes.
Managing all PostgreSQL nodes during different failover scenarios.
Therefore, it is critical to establish a trust relationship between Pgpool-II nodes and PostgreSQL nodes.
In this example, the trust relationship is established as follows:
Install the PostgreSQL packages on the Pgpool-II nodes so that the postgres OS user is created there as well.
Set the same password for the postgres user on all nodes: the Pgpool master node, the Pgpool standby node, PostgreSQL node0, node1, and node2.
Create an SSH key pair on the Pgpool master node and deploy the public key to all nodes.
Deploy the private key to the Pgpool standby node as well.
Set up a trust relationship for the postgres user across all nodes.
The major steps to establish this trust are as follows (a command sketch follows the list):
Run ssh-keygen as the postgres user on the Pgpool master node to generate id_rsa and id_rsa.pub.
Copy the content of id_rsa.pub into ~/.ssh/authorized_keys of the postgres user on all nodes.
Run chmod 700 ~/.ssh on all nodes for the postgres user.
vim /etc/ssh/sshd_config with the following content:
PermitEmptyPasswords yes
PasswordAuthentication yes
Change the postgres user password to the same value on all nodes.
Test ssh postgres@remote_IP to verify that every Pgpool-II node can authenticate to every PostgreSQL node using the SSH key.
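The key generation and distribution can be scripted roughly as follows. This is a minimal sketch run as the postgres user on the Pgpool master node; the PostgreSQL IP addresses are the ones used later in this example, and <pgpool_standby_IP> is a placeholder for your Pgpool standby node:
sudo su - postgres
ssh-keygen -t rsa
# Accept the defaults; this creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.
for NODE in 10.91.9.200 10.91.9.24 10.91.9.41 <pgpool_standby_IP>; do
    ssh-copy-id postgres@$NODE   # appends id_rsa.pub to the remote ~/.ssh/authorized_keys
done
chmod 700 ~/.ssh
# The Pgpool standby node also needs the private key so it can reach the PostgreSQL nodes:
scp ~/.ssh/id_rsa postgres@<pgpool_standby_IP>:~/.ssh/id_rsa
# Verify password-less access:
ssh postgres@10.91.9.200 hostname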
Installation of Pgpool
On RHEL, Pgpool can be installed with the following commands:
sudo rpm -ivh http://www.pgpool.net/yum/rpms/3.6/redhat/rhel-6-x86_64/pgpool-II-release-3.6-1.noarch.rpm
sudo yum -y install pgpool-II-pg94
sudo yum -y install pgpool-II-pg94-debuginfo
sudo yum -y install pgpool-II-pg94-devel
sudo yum -y install pgpool-II-pg94-extensions
sudo yum -y install postgresql-contrib
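Once installation completes, you can confirm the installed packages and version (a quick sanity check; the exact output depends on the release installed):
pgpool --version
rpm -qa | grep pgpool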
Configuration of Pgpool
Perform these steps on each Pgpool-II node.
Step 1
Update pgpool.conf.
Log in as the pgpool user.
Edit pgpool.conf:
vim /etc/pgpool-II/pgpool.conf
with the following content:
Server node configuration items for pgpool.conf:

No.  Configuration Item        Configuration Value
1    listen_addresses          '*'
     (Depending on your security requirements, you might restrict this to a specific IP address.)
2    port                      5432
3    pcp_listen_addresses      '*'
     (Depending on your security requirements, you might restrict this to a specific IP address.)
4    pcp_port                  9898
5    backend_hostname0         '10.91.9.200'
6    backend_port0             5432
7    backend_weight0           1
8    backend_data_directory0   '/db/postgres'
9    backend_flag0             'ALLOW_TO_FAILOVER'
10   backend_hostname1         '10.91.9.24'
11   backend_port1             5432
12   backend_weight1           1
13   backend_data_directory1   '/db/postgres'
14   backend_flag1             'ALLOW_TO_FAILOVER'
15   backend_hostname2         '10.91.9.41'
16   backend_port2             5432
17   backend_weight2           1
18   backend_data_directory2   '/db/postgres'
19   enable_pool_hba           on
20   master_slave_mode         on
21   master_slave_sub_mode     'stream'
22   sr_check_period           10
23   sr_check_user             'replicator'
24   sr_check_password         'replicator'
25   sr_check_database         'postgres'
26   health_check_user         'postgres'
27   health_check_password     'postgres'
28   health_check_database     'postgres'
29   failover_command          '/etc/pgpool-II/failover.sh %d %h %D %m %H %R %M %P'
Note: Parameters for the watchdog configuration are not yet included in this example.
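For reference, the settings above correspond to pgpool.conf lines of the following form (a sketch using the hosts and paths from this example; substitute your own addresses, paths, and passwords):
listen_addresses = '*'
port = 5432
pcp_listen_addresses = '*'
pcp_port = 9898
backend_hostname0 = '10.91.9.200'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/db/postgres'
backend_flag0 = 'ALLOW_TO_FAILOVER'
# ... the backend_*1 and backend_*2 entries follow the same pattern ...
enable_pool_hba = on
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 10
sr_check_user = 'replicator'
sr_check_password = 'replicator'
sr_check_database = 'postgres'
health_check_user = 'postgres'
health_check_password = 'postgres'
health_check_database = 'postgres'
failover_command = '/etc/pgpool-II/failover.sh %d %h %D %m %H %R %M %P'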
Step 2
Create the failover script.
vim /etc/pgpool-II/failover.sh
with the following content:
#!/bin/bash -x
echo ------------------------Begin failover------------------------
#------------------------------------
# Inputs
#------------------------------------
FAILED_NODEID=$1        # %d = failed node id
FAILED_HOSTNAME=$2      # %h = failed host name
FAILED_DBPATH=$3        # %D = failed database cluster path
NEW_MASTER_NODEID=$4    # %m = new master node id
NEW_MASTER_HOSTNAME=$5  # %H = new master host name
NEW_MASTER_DBPATH=$6    # %R = new master database cluster path
OLD_MASTER_NODEID=$7    # %M = old master node id
OLD_PRIMARY_NODEID=$8   # %P = old primary node id
#------------------------
# Nodes
#------------------------
PGDATA=/db/postgres
PGBIN=/usr/pgsql-10.x/bin
LOCALBIN=/db/bin

NODE0_HOSTNAME=10.91.9.200
NODE0_ARCHIVE=/db/node0archive
NODE1_HOSTNAME=10.91.9.24
NODE1_ARCHIVE=/db/node1archive
NODE2_HOSTNAME=10.91.9.41
NODE2_ARCHIVE=/db/node2archive

# The failed node was the old primary: a standby must be promoted to become the new master.
if [ $FAILED_NODEID = $OLD_PRIMARY_NODEID ]; then

    # Check whether the candidate new master was replicating from the failed primary.
    if ssh -T postgres@$NEW_MASTER_HOSTNAME grep -q "$FAILED_HOSTNAME" /db/postgres/recovery.conf; then
        # Yes, it was; go ahead and promote it.
        echo Node $NEW_MASTER_HOSTNAME can be promoted safely
    else
        # Otherwise, the new master has to be switched to the other standby.
        if [ $FAILED_NODEID = 1 ]; then
            if [ $NEW_MASTER_NODEID = 0 ]; then
                NEW_MASTER_HOSTNAME=$NODE2_HOSTNAME
                NEW_MASTER_NODEID=2
            else
                NEW_MASTER_HOSTNAME=$NODE0_HOSTNAME
                NEW_MASTER_NODEID=0
            fi;
        fi;
        if [ $FAILED_NODEID = 2 ]; then
            if [ $NEW_MASTER_NODEID = 0 ]; then
                NEW_MASTER_HOSTNAME=$NODE1_HOSTNAME
                NEW_MASTER_NODEID=1
            else
                NEW_MASTER_HOSTNAME=$NODE0_HOSTNAME
                NEW_MASTER_NODEID=0
            fi;
        fi;
        if [ $FAILED_NODEID = 0 ]; then
            if [ $NEW_MASTER_NODEID = 1 ]; then
                NEW_MASTER_HOSTNAME=$NODE2_HOSTNAME
                NEW_MASTER_NODEID=2
            else
                NEW_MASTER_HOSTNAME=$NODE1_HOSTNAME
                NEW_MASTER_NODEID=1
            fi;
        fi;

        echo New master has been switched to $NEW_MASTER_HOSTNAME
    fi;


    # Work out the new master's archive directory and the remaining standby that must
    # be re-targeted to replicate from the new master.
    TARGET_HOSTNAME=0.0.0.0
    TARGET_ARCHIVE=/db/archive
    STANDBY1_HOSTNAME_TO_RETARGET=0.0.0.0

    if [ $FAILED_NODEID = 0 ]; then
        if [ $NEW_MASTER_NODEID = 1 ]; then
            TARGET_HOSTNAME=$NEW_MASTER_HOSTNAME
            TARGET_ARCHIVE=/db/node1archive
            STANDBY1_HOSTNAME_TO_RETARGET=$NODE2_HOSTNAME
        else
            TARGET_HOSTNAME=$NEW_MASTER_HOSTNAME
            TARGET_ARCHIVE=/db/node2archive
            STANDBY1_HOSTNAME_TO_RETARGET=$NODE1_HOSTNAME
        fi;
    fi;

    if [ $FAILED_NODEID = 1 ]; then
        if [ $NEW_MASTER_NODEID = 0 ]; then
            TARGET_HOSTNAME=$NEW_MASTER_HOSTNAME
            TARGET_ARCHIVE=/db/node0archive
            STANDBY1_HOSTNAME_TO_RETARGET=$NODE2_HOSTNAME
        else
            TARGET_HOSTNAME=$NEW_MASTER_HOSTNAME
            TARGET_ARCHIVE=/db/node2archive
            STANDBY1_HOSTNAME_TO_RETARGET=$NODE0_HOSTNAME
        fi;
    fi;

    if [ $FAILED_NODEID = 2 ]; then
        if [ $NEW_MASTER_NODEID = 0 ]; then
            TARGET_HOSTNAME=$NEW_MASTER_HOSTNAME
            TARGET_ARCHIVE=/db/node0archive
            STANDBY1_HOSTNAME_TO_RETARGET=$NODE1_HOSTNAME
        else
            TARGET_HOSTNAME=$NEW_MASTER_HOSTNAME
            TARGET_ARCHIVE=/db/node1archive
            STANDBY1_HOSTNAME_TO_RETARGET=$NODE0_HOSTNAME
        fi;
    fi;

    # Promote the new master and point the remaining standby at it.
    ssh -T postgres@$NEW_MASTER_HOSTNAME $PGBIN/pg_ctl promote -D $PGDATA
    ssh -T postgres@$STANDBY1_HOSTNAME_TO_RETARGET $LOCALBIN/retargetMaster.sh $TARGET_HOSTNAME $TARGET_ARCHIVE

    exit 0;
else
    # The failed node was a standby: the old primary stays master, and the remaining
    # standby is re-targeted to replicate from it.
    TARGET_HOSTNAME=0.0.0.0
    TARGET_ARCHIVE=/db/archive
    NEW_STANDBY_HOSTNAME=0.0.0.0

    if [ $OLD_PRIMARY_NODEID = 0 ]; then
        TARGET_HOSTNAME=$NODE0_HOSTNAME
        TARGET_ARCHIVE=$NODE0_ARCHIVE
        if [ $FAILED_NODEID = 1 ]; then
            NEW_STANDBY_HOSTNAME=$NODE2_HOSTNAME
        else
            NEW_STANDBY_HOSTNAME=$NODE1_HOSTNAME
        fi
    fi;

    if [ $OLD_PRIMARY_NODEID = 1 ]; then
        TARGET_HOSTNAME=$NODE1_HOSTNAME
        TARGET_ARCHIVE=$NODE1_ARCHIVE
        if [ $FAILED_NODEID = 2 ]; then
            NEW_STANDBY_HOSTNAME=$NODE0_HOSTNAME
        else
            NEW_STANDBY_HOSTNAME=$NODE2_HOSTNAME
        fi
    fi;

    if [ $OLD_PRIMARY_NODEID = 2 ]; then
        TARGET_HOSTNAME=$NODE2_HOSTNAME
        TARGET_ARCHIVE=$NODE2_ARCHIVE
        if [ $FAILED_NODEID = 0 ]; then
            NEW_STANDBY_HOSTNAME=$NODE1_HOSTNAME
        else
            NEW_STANDBY_HOSTNAME=$NODE0_HOSTNAME
        fi
    fi;

    ssh -T postgres@$NEW_STANDBY_HOSTNAME $LOCALBIN/retargetMaster.sh $TARGET_HOSTNAME $TARGET_ARCHIVE

    exit 0;
fi;
echo 'Did not complete failover'
echo -------------------End failover---------------
exit 0;
Make the script executable:
chmod a+x /etc/pgpool-II/failover.sh
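To sanity-check the script outside of Pgpool-II, you can call it by hand with the same positional arguments that Pgpool-II substitutes for %d %h %D %m %H %R %M %P. A hypothetical dry run that simulates the failure of primary node 0 with node 1 chosen as the new master (run this only in a test environment, because it really promotes the standby):
/etc/pgpool-II/failover.sh 0 10.91.9.200 /db/postgres 1 10.91.9.24 /db/postgres 0 0
Run it as the OS user that owns the SSH key set up earlier (postgres in this example) so that the ssh calls inside the script can authenticate.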
Step 3
Update pcp.conf.
vim /etc/pgpool-II/pcp.conf
with the following content:
# Lines beginning with '#' (pound) are comments and will
# be ignored. Again, no spaces or tabs allowed before '#'.
# USERID:MD5PASSWD
postgres:e8a48653851e28c69d0506508fb27fc5
The MD5 hash above corresponds to the default PostgreSQL password 'postgres'; replace it with the hash of your own postgres password.
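The hash can be generated with the pg_md5 utility that ships with Pgpool-II, replacing yourpassword with the actual password:
pg_md5 yourpassword
Paste the resulting hash after the user name in pcp.conf.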
Step 4
Update pool_hba.conf.
vim /etc/pgpool-II/pool_hba.conf
with the following content:
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 10.91.0.0/16 trust
You can adjust it based on your security requirements.
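As one possible hardening step, you could require md5 authentication for remote connections instead of trust (a sketch; with md5, the connecting users must also have entries in Pgpool-II's pool_passwd file):
# IPv4 local connections, password-authenticated:
host all all 10.91.0.0/16 md5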