Achieving Business Scalability with Liferay DXP Unicast Clusters

In a dynamic business ecosystem, scalability has transformed from a desirable luxury into an essential necessity. Every enterprise strives to keep its business processes up and running at all times. In a world of 24×7 connectivity, no business, irrespective of industry or segment, can afford to go offline.
Business scalability in modern times is not only about growth but also about sustenance. Modern technologies allow enterprises to keep operations rolling. Scalability enables businesses to expand the scope of their operations, keep processes balanced under heavy load and ensure that business-critical apps are always available.
At Azilen, we strive to make businesses future-ready with advanced technologies. Facilitating scalability and growth is one of our core goals in every solution we create for a client. In this blog, we cover the story of how we used Liferay to give an advanced digital risk management platform for the FinTech industry the scalability it needed.
More information about the Risk Management Solution can be found in our detailed case study, here.
The client was seeking a risk modeling and mitigation solution capable of monitoring real-time data and conducting insightful analysis. Naturally, zero downtime was one of the most important requirements for such a business-critical solution. And to keep pace with the dynamically growing financial world, the solution had to scale without limitation.
Azilen chose to leverage the continuity and scalability benefits offered by Liferay and used the Liferay DXP platform to create an advanced platform powered by unicast clustering for superior scalability.
Paving the Way for Scalability with Liferay DXP Unicast Clustering
As the latest version of the Liferay web app development framework, Liferay DXP is an advanced platform for building enterprise portals and web apps. Liferay DXP is known for powering next-gen digital experience platforms for internal and external processes.
Clustering is the process Liferay DXP uses to connect two or more nodes so that they behave like a single application. Clustering enables parallel processing, load balancing and fault tolerance, which together ensure the availability and scalability of web apps.
Clustering in Liferay DXP suits high-traffic websites and web applications that need to stay online at all times.
Clustering eliminates the risk of server overload and unresponsiveness, making an app, platform or website resilient to failure and downtime.
Building Scalable Apps with Liferay DXP Clusters
At Azilen, we believe in empowering enterprises with next-level scalability by using modern techniques like Liferay DXP unicast clustering.
By leveraging unicast clustering to build solutions that run on multiple servers at the same time, we ensure that an application keeps running at all times. If any server suffers downtime, traffic switches to another server automatically, ensuring continuity and scalability.
With DXP unicast clustering in place, enterprises can ensure that business processes aren't affected by server downtime at any point in the app's lifetime. At Azilen, we use a structured approach to build a unicast cluster setup in Liferay DXP. This walkthrough was written for Liferay v7.1:
Once the team agrees on applying a cluster setup, the process of setting up the load balancer begins, and decisions are taken on the following:
1. Load Balancer
A load balancer is essential for sharing load and traffic across web servers. There are fundamentally two types: software load balancers and hardware load balancers. In this specific case, we chose a hardware load balancer for balancing server requests.
2. Web Servers
Web servers are used to handle HTTP requests and serve static content such as images and CSS quickly. While configuring the web servers, the following need to be managed appropriately:
a. Sticky Sessions: each user session stays pinned to the node that created it, so requests are not bounced between servers mid-session. They are enabled via the sticky_session flag in workers.properties and the matching jvmRoute in Tomcat, both shown below.
b. mod_jk configuration:
For this, two files need to be configured individually:
i. mod_jk.conf
# Load mod_jk module
LoadModule jk_module modules/mod_jk.so

# Where to find workers.properties
JkWorkersFile /etc/httpd/conf/workers.properties
JkShmFile /var/run/httpd/mod_jk.shm

# Add jkstatus for managing runtime data
JkMount /jkmanager/* jkstatus
<Location /jkmanager>
    JkMount jkstatus
    Order deny,allow
</Location>

# Mount all requests to be routed to the liferaycluster worker
JkMount /* liferaycluster
ii. workers.properties
worker.list=liferaycluster,jkstatus

# Virtual workers
worker.jkstatus.type=status
worker.liferaycluster.type=lb

# Basic defaults (referenced by the liferaycluster workers)
worker.basic.port=8009
worker.basic.type=ajp13
worker.basic.connection_pool_timeout=600
worker.basic.socket_keepalive=True
worker.basic.socket_connect_timeout=2500
worker.basic.lbfactor=1
worker.basic.reply_timeout=120000

# Worker 1
worker.worker1.host=<Node1-Hostname>
worker.worker1.reference=worker.basic

# Worker 2
worker.worker2.host=<Node2-Hostname>
worker.worker2.reference=worker.basic

# JK Status worker
worker.jkstatus.read_only=True

# Associate "real" workers with the virtual liferaycluster worker
worker.liferaycluster.balance_workers=worker1,worker2
worker.liferaycluster.sticky_session=True
worker.liferaycluster.max_reply_timeouts=4
3. Application configuration:
Apart from the Liferay configuration, a few things need to be configured in Tomcat as well to make the cluster work.
a. web.xml
In web.xml, we need to add the distributable tag to let Tomcat know that we want session replication.
Add the <distributable/> tag to Tomcat's web.xml file just before </web-app>.
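For illustration, the relevant fragment of web.xml would look like the sketch below; the rest of the file's contents are omitted:

<web-app>
    <!-- ... existing servlet, filter and listener definitions ... -->

    <!-- Tells Tomcat that sessions of this application may be replicated across the cluster -->
    <distributable/>
</web-app>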
b. server.xml
Two things need to be kept in mind while configuring server.xml to achieve the cluster:
i. Worker name in the Engine tag
Provide the worker name defined in the web server's workers.properties here, in the Engine tag. On the second node, use that node's worker name (worker2) instead.
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
ii. Add the Cluster tag with the details of the receiver and sender members
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">

    <!--
    <Manager className="org.apache.catalina.ha.session.BackupManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"
             mapSendOptions="6"/>
    -->
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>

    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="<Current Node-IP>"
                  port="4444"
                  autoBind="100"
                  selectorTimeout="5000"
                  maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
                       timeout="60000"
                       keepAliveTime="10"
                       keepAliveCount="0"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"
                     staticOnly="true"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
            <!-- uniqueId must be unique for each and every node -->
            <Member className="org.apache.catalina.tribes.membership.StaticMember"
                    host="<Node2-IP>"
                    port="4444"
                    uniqueId="{1,2,3,4,8,1,5,2,6,0,1,0,0,0,0,9}"/>
        </Interceptor>
    </Channel>

    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt"/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
c. tcp.xml
Extract tcp.xml from $liferay_home/osgi/marketplace/Liferay Foundation.lpkg/com.liferay.portal.cluster.multiple-[version number].jar and place it in an appropriate location. Open the tcp.xml file, add the following property to its TCP tag and point TCPPING at both nodes:
singleton_name="GIVE_ANY_NAME"

<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:IP_ADDRESS_NODE1[7800],IP_ADDRESS_NODE2[7800]}"
         port_range="1"
         num_initial_members="10"/>
Note: IP_ADDRESS_NODE1 and IP_ADDRESS_NODE2 are the IP addresses of the two cluster nodes.
Open <LIFERAY_HOME>\<TOMCAT_SERVER>\bin\setenv.bat (for Windows) or <LIFERAY_HOME>/<TOMCAT_SERVER>/bin/setenv.sh (for Linux).
Add the following lines:
JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=IP_ADDRESS_CURRENT_NODE"
JAVA_OPTS="$JAVA_OPTS -Djgroups.tcpping.initial_hosts=IP_ADDRESS_CURRENT_NODE[7800],IP_ADDRESS_SECOND_NODE[7800]"
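On Windows, setenv.bat takes batch syntax rather than the shell syntax above; a minimal equivalent, using the same placeholder addresses, would be:

rem Placeholder IP addresses; replace with the actual node addresses
set "JAVA_OPTS=%JAVA_OPTS% -Djgroups.bind_addr=IP_ADDRESS_CURRENT_NODE"
set "JAVA_OPTS=%JAVA_OPTS% -Djgroups.tcpping.initial_hosts=IP_ADDRESS_CURRENT_NODE[7800],IP_ADDRESS_SECOND_NODE[7800]"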
4. Liferay configuration:
The basic configuration is done. Now we need to think about several other aspects where Liferay itself comes into the picture.
a. Data folder configuration
The data folder should be shared between both nodes for a proper cluster configuration. This can be done by creating the config file com.liferay.portal.store.file.system.configuration.FileSystemStoreConfiguration.cfg in the [Liferay Home]/osgi/configs directory.
Specify the mounted path of your shared file system in the rootDir property, as below:
rootDir = /opt/liferay/data/document_library
b. Database configuration
All the nodes must point to the same database. A clustered database can also be configured to ensure high availability.
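As an illustrative sketch, pointing every node at the same database comes down to identical JDBC properties in each node's portal-ext.properties. The driver, hostname, schema and credentials below are placeholders (MySQL is only an example):

jdbc.default.driverClassName=com.mysql.cj.jdbc.Driver
jdbc.default.url=jdbc:mysql://<DB-Hostname>:3306/lportal?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=<DB-User>
jdbc.default.password=<DB-Password>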
c. Index configuration
Liferay recommends running the search engine as a separate setup in order to achieve scalability. You can use either Solr or Elasticsearch to store your indexes.
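As a minimal sketch, assuming a remote Elasticsearch 6 server (the engine version paired with Liferay 7.1) and placeholder host and cluster names, the connection can be described in an OSGi config file named com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config in [Liferay Home]/osgi/configs:

operationMode="REMOTE"
clusterName="<ES-Cluster-Name>"
transportAddresses="<ES-Hostname>:9300"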
d. portal-ext.properties
We need to define the properties below to ensure the cluster works appropriately. Note that cluster.link.autodetect.address should point to a host and port that every node can always reach; the database server is a common choice:
cluster.link.autodetect.address=<DB-Hostname>:<DB-Port>
cluster.link.enabled=true
ehcache.cluster.link.replication.enabled=true
cluster.link.channel.properties.control=myehcache/tcp.xml
cluster.link.channel.properties.transport.0=myehcache/tcp.xml
portlet.session.replicate.enabled=true
Once all these steps and configurations are complete, the cluster setup is done. To test the unicast cluster, log in on one node and then shut that node down. Click through to another page to check that the session has been replicated to the surviving node.
Using the above technique, developers can set up unicast clusters to promote scalability in enterprise apps, web platforms and websites.
Wrapping Up
Scalability is an essential quality of any enterprise app. In a dynamic business environment, businesses cannot achieve their growth goals without scalable web apps and platforms. With Liferay DXP powering your business app, specialized unicast clusters can be created to ensure your business-critical solutions are always accessible and ready to cater to changing business requirements.
Looking to build a scalable web app for your business? Get in touch with our Product Development Expert now!