Our experience with Connections 5 on IBM PureSystems


Over the last few weeks we were involved in a customer project which led us to develop IBM Connections patterns for IBM PureSystems. The goal of these patterns was to create a fully automated installation process for an IBM Connections 5 environment that builds on (and depends on) the official IBM patterns for WAS and DB2.


Let’s skip the technical part for a moment and go directly to the current results:
We created two main patterns for Connections:

  1. Connections system pattern
    which consists of the following parts:
    • IBM Deployment manager pattern
    • IBM DB2 Enterprise HADR Primary pattern
    • IBM HTTP servers pattern
    • IBM Custom nodes pattern (for Connections)
    • IBM Custom nodes pattern (for Connections Content Manager)
    • IBM Core OS pattern (for Security Directory Integrator – formerly known as Tivoli Directory Integrator / TDI)
  2. Connections add node pattern
    which consists of the following parts:
    • IBM DB2 Enterprise HADR Standby pattern
    • IBM HTTP servers pattern
    • IBM Custom nodes pattern (for second Connections cluster node)
    • IBM Custom nodes pattern (for second CCM cluster node)
    • IBM Core OS pattern (for second SDI)

With these we can run a fully automated installation of Connections 5 with Connections Content Manager and optimized WAS configurations, and then add additional servers to achieve a high-availability configuration for Connections, CCM, DB2 and SDI. Currently this can be done in as little as 5½ hours (system pattern ~3.5h; add node pattern ~2h).
In addition, it is possible to deploy multiple Connections patterns at the same time (for example for production, development and testing purposes).

Technical insight

We encountered some interesting challenges on the way. Just to name some of them:

  • Necessary iFix installation for the IBM WAS pattern and the Connections installation
  • Silent configuration of Connections Content Manager
  • DB2 HADR configuration after the initial standalone installation

To avoid these problems we defined a specific startup and execution order. Here is an example for the system pattern:

  1. DMGR
    1. Set initial environment settings
  2. DB2
    1. Create all DB2 instances and create needed Databases for Connections and CCM
  3. IHS
    1. Install IBM HTTP Server and set configuration
  4. Connections node
    1. Copy DB2 driver
    2. Install WAS iFixes
  5. DMGR
    1. Install WAS iFixes
    2. Install Connections 5
    3. Execute post installation tasks for Connections 5
    4. Install Connections 5 iFixes
  6. Connections node
    1. Adjust JVM settings
    2. Setup Connections search for clustered environments
  7. CCM node
    1. Copy DB2 driver
    2. Install WAS iFixes
    3. Install CCM
    4. Adjust JVM settings
  8. SDI
    1. Install SDI
    2. Customise SDI
  9. DMGR
    1. Execute optional tasks for Connections
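Condensed into a script, the ordering above looks roughly like this (a sketch only — each function call stands in for a script package on the named node, and the names are ours, not the actual package names):

```shell
#!/bin/sh
# Sketch of the system-pattern execution order. Each run() call stands in
# for a script package executed on the named node (names are illustrative).
ORDER=""
run() { ORDER="${ORDER}${1} "; echo "[$1] $2"; }

run DMGR "set initial environment settings"
run DB2  "create DB2 instances and databases for Connections and CCM"
run IHS  "install IBM HTTP Server and apply configuration"
run CNX  "copy DB2 driver, install WAS iFixes"
run DMGR "install WAS iFixes, install Connections 5, post-install tasks, Connections iFixes"
run CNX  "adjust JVM settings, set up clustered search"
run CCM  "copy DB2 driver, install WAS iFixes, install CCM, adjust JVM settings"
run SDI  "install and customise SDI"
run DMGR "execute optional Connections tasks"
```

Note that the DMGR is visited three times: the Connections installation itself cannot start before the custom nodes have federated and received their iFixes, and the optional tasks have to wait for every other component.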


For the silent configuration of CCM we had to work with Linux’s “expect” command and pass the required input to the “createGCD.sh” and “createObjectStore.sh” scripts.
A code example:

 echo -e "\n\nCreating ObjectStore"
  COendpointUrl=$(awk '/>communities</{getline; print}' $(find ${dmgrPath}/config/cells/ -name LotusConnections-config.xml) | grep -Po '(?<=ssl_href=")[^"]+')
  cd ${ccmPath}/ccmDomainTool; expect -c "
   set timeout 180
   spawn ./createObjectStore.sh
   expect \"Enter the Deployment Manager administrator user ID*\" { send \"${COadminUser}\r\" }
   expect \"Enter the Deployment Manager administrator password:\" { send \"${COadminPassword}\r\" }
   expect \"*correct information?*\" { send \"Yr\" }
   expect \"Enter group name*\" { send -- \"\r\" }
   expect \"endpoint URL\" { send \"${COendpointUrl}\r\" }
   expect timeout { puts \"'expect' timeout reached\"; exit }
  "
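The ssl_href extraction can be seen in isolation with a minimal sample file (the XML structure below is simplified for illustration; the real LotusConnections-config.xml is considerably larger):

```shell
#!/bin/sh
# Sketch: extract the communities SSL endpoint the same way as above,
# from a simplified sample file. Requires GNU grep for -P (Perl regex).
cat > /tmp/LotusConnections-config.xml <<'EOF'
<service>
  <name>communities</name>
  <static href="http://cnx.example.com/communities" ssl_href="https://cnx.example.com/communities"/>
</service>
EOF

# find the line containing ">communities<", take the following line,
# then cut out the value of its ssl_href attribute with a lookbehind
COendpointUrl=$(awk '/>communities</{getline; print}' /tmp/LotusConnections-config.xml \
  | grep -Po '(?<=ssl_href=")[^"]+')
echo "$COendpointUrl"
```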

As mentioned above, another critical process was enabling the HADR functionality. We had to switch between different systems to accomplish this. The rough execution order looks like this:

  1. (local execution) Prepare second DB2 node (creating users, instance, …)
  2. (remote execution) Stop all servers on DMGR
  3. (remote execution) Prepare primary DB2 node for HADR and create DB backup
  4. (local execution) Restore databases from primary node on secondary node
  5. (remote execution) Start HADR on primary node
  6. (local execution) Create DB2 High Availability Instance Configuration Utility domain on secondary node
  7. (remote execution) Create DB2 HAICU domain on primary node
  8. (remote execution) Start all servers on DMGR
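Condensed into DB2 commands, the core of steps 3–5 looks roughly like the following sketch. Database name, hostnames, ports and paths are placeholders, and a stub prints the commands instead of executing them, since the real CLI requires a DB2 installation:

```shell
#!/bin/sh
# Sketch of the HADR enablement sequence with standard DB2 commands.
# All names/hosts/ports are placeholders; the db2 stub only prints.
N=0
db2() { LAST="db2 $*"; N=$((N+1)); echo "$LAST"; }
DB=HOMEPAGE

# primary node: set the HADR configuration and take a backup
db2 update db cfg for $DB using HADR_LOCAL_HOST db2-prim HADR_LOCAL_SVC 3700 HADR_REMOTE_HOST db2-stby HADR_REMOTE_SVC 3700 HADR_REMOTE_INST db2inst1 HADR_SYNCMODE NEARSYNC
db2 backup db $DB to /backup

# standby node: restore the backup, mirror the HADR settings, start as standby
db2 restore db $DB from /backup
db2 update db cfg for $DB using HADR_LOCAL_HOST db2-stby HADR_LOCAL_SVC 3700 HADR_REMOTE_HOST db2-prim HADR_REMOTE_SVC 3700 HADR_REMOTE_INST db2inst1 HADR_SYNCMODE NEARSYNC
db2 start hadr on db $DB as standby

# primary node: start HADR as primary (the standby must already be running)
db2 start hadr on db $DB as primary
```

The order matters: starting HADR on the primary before the standby is up would fail (or require the “by force” option), which is why the pattern jumps between local and remote execution as listed above.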

After all these steps, DB2 HADR is up and running and immediately available in the Connections 5 environment.


It was a really fun experience working on this project and with the PureSystems environments. Let’s hope we get more of these opportunities in the future!
