Tracking Data purging
  • 22 Aug 2023



Purging is an essential component of Atomic Scope because the product deals with large amounts of tracking data.

The purging process has been improved with each version of Atomic Scope, making it more reliable and better performing. With every new purging implementation, we run performance and load tests and establish threshold values.

The primary focus of this analysis and testing is: how effectively can users make use of this feature?

Customers first need to understand the level of purging their requirements call for, so that they can select the purging options appropriate for their environment.

Important note
Atomic Scope offers purging options at three levels:

  • Business process
  • Transaction
  • Global level
  • Some transactions have multiple stages with archived message bodies, while other transactions in the same business process carry little data. It is therefore possible to purge selected transactions after the required number of days.
  • Some business processes have multiple transactions with extensive activity tracking; for these, purging at the business process level is the practical choice.
  • Global purging is performed based on the overall data count and time duration.
  • Atomic Scope can apply business process and transaction level purging together with global level purging.
  • If a business process or transaction has its own purging configuration, data is deleted according to those settings; if no such configuration exists, the global purging settings apply.
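The fallback between the three levels described above can be sketched as follows. This is a minimal illustration of the described behavior; the names (`PurgeSettings`, `resolve_purge_settings`, `retention_days`) are hypothetical and not Atomic Scope's actual API.

```python
# Illustrative sketch of the purge-settings fallback: the most
# specific configuration wins, and global settings are the default.
# All names here are hypothetical, not Atomic Scope's real API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PurgeSettings:
    retention_days: int


def resolve_purge_settings(
    transaction_settings: Optional[PurgeSettings],
    business_process_settings: Optional[PurgeSettings],
    global_settings: PurgeSettings,
) -> PurgeSettings:
    """Return the settings that apply to one transaction."""
    if transaction_settings is not None:
        return transaction_settings
    if business_process_settings is not None:
        return business_process_settings
    return global_settings


# A transaction with no settings of its own inherits from its business
# process; when neither is configured, the global settings apply.
print(resolve_purge_settings(None, PurgeSettings(7), PurgeSettings(30)).retention_days)  # 7
print(resolve_purge_settings(None, None, PurgeSettings(30)).retention_days)              # 30
```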

How can we establish a benchmark?

Continuous load testing helps us understand the reliability and scalability of this feature, as well as of the overall application.

This testing used a minimal configuration and data set. Customers can then set their own benchmark according to the data flow in their environment.

The analyzed and tested metrics are given below:

  1. Total size of one message: 4 KB
  2. Four stages per transaction
  3. Five messages processed per second
  4. Total data available in the database: 8 lakhs (800,000) records

Below are the results of the continuous load testing:

Data count    Execution time (seconds)
5000          6.8
5000          2.4
5000          3.2
5000          2.99
5000          4.8
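From the runs above, the average time to purge a 5,000-record batch works out to about 4 seconds:

```python
# Execution times (seconds) from the load-test table above,
# each for a batch of 5,000 transactions.
times = [6.8, 2.4, 3.2, 2.99, 4.8]
average = sum(times) / len(times)
print(round(average, 2))  # 4.04
```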

The purging process runs every 4 minutes, and each cycle deletes 5,000 main activities (transactions) along with their related stage activities, archived message bodies, and exceptions.
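A quick back-of-the-envelope check shows why this cycle keeps up with the benchmark load, assuming each incoming message corresponds to one transaction (an assumption, since the benchmark only states the message rate):

```python
# Compare the ingestion rate from the benchmark with the purge rate.
messages_per_second = 5     # processing rate from the benchmark
cycle_minutes = 4           # purge cycle interval
purged_per_cycle = 5000     # transactions deleted each cycle

# Transactions arriving during one purge cycle (assumes 1 message = 1 transaction).
ingested_per_cycle = messages_per_second * 60 * cycle_minutes
print(ingested_per_cycle)                     # 1200
print(purged_per_cycle > ingested_per_cycle)  # True: purging outpaces ingestion
```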

Important note
Each cycle purges 5,000 transactions together with their related activities. To test scalability, we ran a scenario with a single transaction associated with 10,000 stages; the purging process completed without a hitch.

Observations on the version 8.3 implementation of the purging process

  • Total data available in the database is 10 lakhs (1,000,000) records
  • Version: 8.3
  • Total message size: 4 MB
  • Four stages per transaction

Business process level purging    Global purging settings    Data count deleted (per cycle)    Execution time (seconds)
Yes                               Yes                        10000                             22.89
Yes                               Yes                        5000                              9.2

Observations on the version 9.0 implementation of the purging process

  • Total data available in the database is 10 lakhs (1,000,000) records
  • Version: 9.0
  • Total message size: 4 MB
  • Four stages per transaction

Business process level purging    Transaction level purging    Global purging settings    Data count deleted (per cycle)    Execution time (seconds)
Yes                               Yes                          Yes                        5000                              26.676
Yes                               Yes                          Yes                        5000                              7.6
No                                Yes                          Yes                        5000                              17.757

How can we improve the infrastructure to make the purging process feasible?

  • The execution time varies with the size of the message and the number of associated activities. As the message size grows, the execution time increases, which can result in a time-out exception.

  • When this happens, Atomic Scope notifies the customer that the purging process has not actually taken place, so they know the time-out occurred. The customer can then overcome it by upgrading the infrastructure.

  • Scaling up the infrastructure to handle massive amounts of data speeds up processing and helps avoid time-outs and other purging exceptions.

  • For SQL Server, the relevant infrastructure includes the amount of RAM, the hard drive capacity for storing business transactions, and the number of CPU cores.

  • Upgrade the infrastructure according to the message size and data flow.

  • If message bodies are large, the customer can lower the data count per purge cycle. Fewer records are then deleted in each cycle, which keeps the data easier to manage and purge.
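One way to reason about lowering the per-cycle count is to scale the batch size down as the average archived-body size grows, so each cycle stays within a fixed time budget. This is purely an illustrative heuristic, not an Atomic Scope setting:

```python
# Illustrative heuristic (not an Atomic Scope configuration option):
# shrink the purge batch in proportion to the average archived
# message-body size, relative to the 4 KB benchmark message.
def purge_batch_size(avg_body_kb: float,
                     baseline_kb: float = 4.0,
                     baseline_batch: int = 5000,
                     minimum: int = 500) -> int:
    """Return a per-cycle purge count scaled down for large bodies."""
    scale = baseline_kb / max(avg_body_kb, baseline_kb)
    return max(int(baseline_batch * scale), minimum)


print(purge_batch_size(4))   # 5000 (benchmark message size)
print(purge_batch_size(40))  # 500  (10x larger bodies -> 10x smaller batch)
```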

