Wednesday, April 30, 2014

Understanding the need for Whole Server Migration in Oracle Fusion Middleware

Whole server migration (WSM) is the mechanism built into Oracle WebLogic server that makes it possible to move an entire server instance, along with all of the services it owns, to a different physical machine when triggered by a platform failure event. WSM is probably overkill for most service requirements, because in the vast majority of cases, services can be set up to run on all managed server instances in a cluster, and failover of services from one server to another is transparent and automatic without the need to migrate an entire server instance to another platform. A few critical services, however, with JMS and JTA transactions being the prime examples, have a different relationship with managed server clusters.  These services cannot be pulled out of a failed server instance and restarted in a different operational server instance; the server instance, IP address and all, has to be migrated together with these special services.  Such services normally fall under the purview of SOA managed servers.

Architects and administrators responsible for the HA design should understand when WSM is needed, based on the requirements of the services being designed for the client. This article explains the need for WSM across various service scenarios, and should give a fair (if not complete) idea of whether or not to set up server migration.

Ability to recover instances using the service engine:

WSM is not needed (it would be unnecessary and costly) whenever instances can be recovered by the service engine (say, BPEL). Recovery settings in Enterprise Manager can be configured to recover instances automatically or manually, at a scheduled interval or on system/server start. It is therefore important to know when an instance can and cannot be recovered. Put simply, an instance can be recovered if it was dehydrated (persisted in the database) before it failed.
Consider an asynchronous BPEL process 'A' with the default transaction and oneWayDeliveryPolicy settings, i.e., "requiresNew" and "async.persist". Before the instance is created, the incoming message is persisted into the DLV_MESSAGE table, within the parent transaction if one exists. Because the message is persisted in the database, the instance can still be recovered after any failure or rollback.
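For reference, these defaults can also be set explicitly on the BPEL component in composite.xml, in the same property style as the snippets shown later in this article:
  <property name="bpel.config.transaction" type="xs:string" many="false">requiresNew</property>
  <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">async.persist</property>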


Ability to "Fail over":
Homogeneous services that are deployed cluster-wide can transparently serve requests from either of the two nodes if a service on one node fails or goes down.
Consider a scenario where Service A calls Service B synchronously. Say that, based on round-robin load balancing, Service B on Server 2 is chosen to serve the request. If Service B fails while the instance is in progress (because of a service issue or a server shutdown), the transaction is rolled back. On retry the request is routed to Service B on Server 1, and the transaction completes smoothly. This is called failover. If Service B is asynchronous, you may instead need to recover it manually from the service engine console, as explained above.
Server migration is not really needed for these services because they are homogeneous (not heterogeneous or pinned to a particular server); failover combined with disciplined retry and recovery is sufficient.
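As an illustration of that "disciplined retry", a minimal fault-policy sketch that retries remote faults on a partner link could look like the snippet below. The policy id, retry count and interval are illustrative values, not recommendations, and the policy still has to be attached to the composite or component through a matching fault-bindings.xml file.

  <faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
    <faultPolicy version="2.0.1" id="DemoRetryPolicy">
      <Conditions>
        <!-- retry remote faults raised while invoking a partner link -->
        <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension" name="bpelx:remoteFault">
          <condition>
            <action ref="ora-retry"/>
          </condition>
        </faultName>
      </Conditions>
      <Actions>
        <Action id="ora-retry">
          <retry>
            <retryCount>3</retryCount>
            <retryInterval>5</retryInterval>
            <retryFailureAction ref="ora-human-intervention"/>
          </retry>
        </Action>
        <Action id="ora-human-intervention">
          <humanIntervention/>
        </Action>
      </Actions>
    </faultPolicy>
  </faultPolicies>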

Purpose of the Server Migration:
We do not need whole server migration if the service falls into either of the above categories (ability to recover or ability to fail over). However, if your process is

synchronous, or asynchronous with oneWayDeliveryPolicy set to 'sync' (as below)
  <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">sync</property>
then the BPEL process participates in the same thread as the caller, without the message being persisted before an instance is created. In these situations a rollback of the instance leaves no trace of the message in the database, and hence no possibility of recovering it through the service engine. If the service is homogeneous, a retry (using the retry option on the partner link or adapter) may trigger failover, i.e., kick off the service on the other node as explained in the previous section. However, if the service cannot be failed over because it is pinned to a specific server, the only option to recover it is to bring the server up again, on the same node or on another node (server migration). Let us see this practically with an example.
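Similarly, the adapter-level retry mentioned above is typically configured with the standard JCA retry properties on the binding in composite.xml; the reference name below is illustrative and the values are examples only:

  <reference name="DemoQueueReference">
    <binding.jca config="DemoQueueReference_jms.jca">
      <property name="jca.retry.count">3</property>
      <property name="jca.retry.interval">2</property>
      <property name="jca.retry.backoff">2</property>
    </binding.jca>
  </reference>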

Important Note: If the BPEL process is
asynchronous with oneWayDeliveryPolicy set to 'async.cache' (as below)
  <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">async.cache</property>
the message is stored in an in-memory cache and vanishes upon server failure. Because the message is never persisted, there is nothing to recover: a new thread and a new transaction are created, but nothing is written anywhere, so if the instance rolls back for any reason you have neither a chance to recover it from the BPEL engine nor through whole server migration. The thread and the transaction are rolled back, and the message is lost with them. So be careful when you design a service with async.cache.

Whole Server Migration Example : 
A sample service design that leaves no recovery choice other than server migration.


Background :
  • WLS_SOA1 and WLS_SOA2 are two managed servers on machines SOAHOST1 and SOAHOST2, clustered (SOA_Cluster) in a WebLogic domain with SOA installed and configured on them.
  • The transaction and JMS logs/persistent stores are kept at a common location accessible to both machines.
  • Whole server migration is NOT set up.
  • DemoOutQueue and DemoInQueue are uniform distributed queues.
  • Service A is an asynchronous service with oneWayDeliveryPolicy set to 'sync'. It consumes messages placed on DemoOutQueue and places them onto DemoInQueue. A delay of approximately 30 seconds is introduced between consuming and producing (using some for loops). It is deployed on the SOA cluster.
  • Auto-recovery on startup and at scheduled intervals is turned off.
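For reference, the test messages in the experiments below were placed with a small standalone JMS client along the lines of the following sketch. The t3 URL, port, connection factory and queue JNDI names, and the payload shape are assumptions based on the background above; adjust them to your own domain, and keep a WebLogic client jar (for example wlthint3client.jar) on the classpath.

    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    // Standalone client that places five test messages on the DemoOutQueue UDQ.
    public class DemoOutQueueProducer {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            // Cluster address; hosts and ports are assumptions for this setup.
            env.put(Context.PROVIDER_URL, "t3://SOAHOST1:8001,SOAHOST2:8001");
            InitialContext ctx = new InitialContext(env);

            // JNDI names are assumptions; use the names configured in your JMS module.
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/DemoConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/DemoOutQueue");

            Connection con = cf.createConnection();
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            try {
                // Payload shape is illustrative; it must match the JMS adapter design.
                for (int id = 1; id <= 5; id++) {
                    TextMessage msg = session.createTextMessage("<Demo><Id>" + id + "</Id></Demo>");
                    producer.send(msg);
                }
            } finally {
                producer.close();
                session.close();
                con.close();
                ctx.close();
            }
        }
    }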

Experiments :

Test 1:
Action 1 : 
Five custom messages (compatible with the JMS adapter design) with ids 1, 2, 3, 4 and 5 were placed onto DemoOutQueue.
Wait 15 seconds and shut down the WLS_SOA2 server.

Observation for Action 1:
Messages 1, 3 and 5 were processed successfully by WLS_SOA1.
Messages 2 and 4 went to a non-recoverable state because WLS_SOA2, the server processing them, was abruptly shut down.
A few seconds after WLS_SOA2 was shut down, messages 2 and 4 were picked up by Service A on WLS_SOA1 and processed successfully.
All messages 1 through 5 ended up in DemoInQueue.

Test 2: Repeat Test 1 with different message ids, for clarity and to avoid confusion.

Action 1 : 
Five custom messages (compatible with the JMS adapter design) with ids 6, 7, 8, 9 and 10 were placed onto DemoOutQueue.
Wait 15 seconds and shut down the WLS_SOA2 server.
Observation for Action 1:
Messages 6, 8 and 10 were processed successfully by WLS_SOA1.
Messages 7 and 9 went to a non-recoverable state because WLS_SOA2, the server processing them, was abruptly shut down.
Wait for some time to see if anything happens.
Nothing else happened.
Only messages 6, 8 and 10 were placed in DemoInQueue.

Action 2: 
Start the WLS_SOA2 server.
Observation for Action 2: 
Message 7 was picked up by Service A on WLS_SOA1 and message 9 by Service A on WLS_SOA2; both were processed successfully and placed onto DemoInQueue.

Note: In practice, the results of Test 2 may come out just like those of Test 1, and vice versa; you may need to run the tests a number of times to reproduce exactly the outcomes shown above. If your WebLogic cluster is round-robin load balanced, however, you will most probably see both outcomes within just two runs.

Quick understanding of the UDQ behaviour in cluster before we jump on to the analysis:

A uniform distributed queue is a queue distributed across the servers of a cluster: to a client (internal or external) it appears as a single queue, say DemoOutQueue. Internally, however, the DemoOutQueue UDQ consists of two member queues, one targeted to the JMS server on WLS_SOA1 and one to the JMS server on WLS_SOA2; call them DemoOutQueue1 and DemoOutQueue2 for simplicity. You can see this in the Admin console under JMS Modules -> Distributed Queue -> Monitoring.

When a message or a set of messages is placed on the UDQ (here DemoOutQueue), the messages are placed either on DemoOutQueue1 or on DemoOutQueue2, but not on both, for data consistency. This is called the singleton or pinned behaviour of JMS.
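If you prefer to verify this programmatically rather than through the console, a rough sketch like the one below connects to the domain runtime MBean server and prints the current message count of every JMS destination runtime, which reveals which UDQ member actually holds the messages. The admin host, port and credentials are placeholders, and a WebLogic client jar is required on the classpath.

    import java.util.Hashtable;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import javax.naming.Context;

    // Lists current message counts of all JMS destination runtimes in the domain,
    // which shows which UDQ member (DemoOutQueue1 vs DemoOutQueue2) holds the messages.
    public class UdqMemberCounts {
        public static void main(String[] args) throws Exception {
            // Admin server host, port and credentials are placeholders.
            JMXServiceURL url = new JMXServiceURL("t3", "SOAHOST1", 7001,
                    "/jndi/weblogic.management.mbeanservers.domainruntime");
            Hashtable<String, Object> env = new Hashtable<String, Object>();
            env.put(Context.SECURITY_PRINCIPAL, "weblogic");
            env.put(Context.SECURITY_CREDENTIALS, "welcome1");
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                // Query every JMS destination runtime MBean in the domain.
                for (ObjectName dest : conn.queryNames(
                        new ObjectName("com.bea:Type=JMSDestinationRuntime,*"), null)) {
                    Object name = conn.getAttribute(dest, "Name");
                    Object count = conn.getAttribute(dest, "MessagesCurrentCount");
                    System.out.println(name + " -> " + count + " message(s)");
                }
            } finally {
                connector.close();
            }
        }
    }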

Analysis :

Test 1 - Observation 1 - Reasoning: When messages 1-5 were placed on DemoOutQueue, cluster load balancing chose DemoOutQueue1 (i.e., the DemoOutQueue member hosted on the JMS server of WLS_SOA1) and placed all of them there.
Service A on WLS_SOA1 and WLS_SOA2 (call them ServiceA1 and ServiceA2 for simplicity) picked the messages up in round-robin fashion: ServiceA1 picked 1, 3 and 5, ServiceA2 picked 2 and 4, and instances were created accordingly. When WLS_SOA2 was shut down, the ServiceA2 instances for messages 2 and 4 were rolled back (they had not yet finished processing), and because Service A uses the 'sync' delivery policy, the messages rolled back to the queue they came from, in this case DemoOutQueue1. Since WLS_SOA1 was still up and running, ServiceA1 picked up the rolled-back messages 2 and 4 and processed them. Hence all the messages were successful.

Test 2 - Observations 1 and 2 - Reasoning: This time messages 6-10 were placed on DemoOutQueue2 (i.e., the DemoOutQueue member on the JMS server targeted to WLS_SOA2). ServiceA1 picked messages 6, 8 and 10; ServiceA2 picked messages 7 and 9, and instances were created accordingly. When WLS_SOA2 was shut down, the instances being processed by ServiceA2 (for messages 7 and 9) were abruptly rolled back to DemoOutQueue2. But with WLS_SOA2 down, no consumer can receive anything from DemoOutQueue2: the member queue is effectively unavailable because its hosting server is down. Hence nothing happens.
When WLS_SOA2 comes back up, messages 7 and 9 are picked up by ServiceA1 and ServiceA2 respectively, in round-robin fashion, and are processed successfully, placing the messages onto DemoInQueue.

Quick Summary: As you can now see, any messages sitting on a queue member targeted to a server that went down are not available for failover. The only solution is to get that queue back up, which means either bringing the server up again on the same node or, after multiple unsuccessful attempts on that node, migrating it to another node.

Conclusion : 
Whole server migration involves a tightly orchestrated sequence of events, with participation from WebLogic, Coherence, O/S, network, and database components, that gets triggered when a SOA server goes down in a Fusion Applications infrastructure. It is costly and time consuming, and it requires extensive hardware sizing and capacity planning to host the migrated server. One should seriously explore whether things can work without WSM by designing services to fall under service engine recovery or failover scenarios. However, if you make heavy use of JMS distributed destinations and XA transactions spanning multiple resources (multiple databases) with two-phase commit, then WSM is the way to go.

Note: Service migration is another possible approach, but it is not yet supported with Oracle BPEL/OSB. Refer to My Oracle Support articles NOTE:1306533.1 and NOTE:1407715.1. For service-level migration support, enhancement requests have been raised for SOA 11g (BUG:13447082) and for OSB 11g (BUG:13446665); these are still under development and are expected in future releases.

This is still only one use case explaining the need for whole server migration, namely JMS services pinned to a server. In my next blog I shall talk more about JTA recovery service use cases.

Disclaimer: The above experiments are based on my own observations in an environment set up on my local machines. Behaviour could differ in a real production infrastructure, although that is highly unlikely.

References:
http://docs.oracle.com/cd/E14571_01/web.1111/e13709/migration.htm
http://www.ateam-oracle.com/floating-ip-addresses-and-whole-server-migration-in-fusion-applications/#comment-98
https://community.oracle.com/thread/3527201
Oracle Enterprise Deployment Guide (EDG)
https://blogs.oracle.com/soabpm/entry/soa_suite_11g_-_transactions_b_1
