Managing SQS or Kafka Events for Optimal Performance

Please note that this is applicable for CRM updates only (not DWH updates).

To ensure optimal performance and avoid congestion in the queue system, SQS or Kafka events must not exceed a threshold of 50,000 messages per hour. Exceeding this limit can lead to an undesired buildup within the queue.

Should the volume of messages exceed this limit, the system transitions into a queuing mode and processes messages on a FIFO (First In, First Out) basis: messages that arrived first are processed first, which can delay the processing of subsequent messages.

For larger updates that might surpass the 50,000-message cap, alternative methods should be employed. We suggest utilising Kafka or SQS for conducting bulk updates. Alternatively, consider using the REST API for batch imports or send-outs, as these options are designed for handling larger volumes of data, enabling more efficient data management and minimising potential bottlenecks. Choosing the solution that best fits your needs helps ensure smooth system operation.
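One way to stay under the hourly cap is to throttle on the producer side before publishing to SQS or Kafka. The sketch below is purely illustrative and assumes nothing about the CRM's own APIs: the 50,000-per-hour figure comes from this article, while the `HourlyRateLimiter` class, its method names, and the rolling-window approach are hypothetical choices for the example.

```python
import time
from collections import deque


class HourlyRateLimiter:
    """Rolling-window guard that keeps event sends under a per-hour cap.

    Illustrative sketch only: the 50,000/hour default reflects the limit
    described in this article; the class itself is not part of any product API.
    """

    def __init__(self, max_per_hour=50_000, window_seconds=3600,
                 clock=time.monotonic):
        self.max_per_hour = max_per_hour
        self.window_seconds = window_seconds
        self.clock = clock       # injectable clock, useful for testing
        self.sent = deque()      # timestamps of sends inside the window

    def try_acquire(self):
        """Return True if another message may be sent now, else False."""
        now = self.clock()
        # Discard timestamps that have fallen out of the rolling window.
        while self.sent and now - self.sent[0] >= self.window_seconds:
            self.sent.popleft()
        if len(self.sent) < self.max_per_hour:
            self.sent.append(now)
            return True
        return False
```

A producer could call `try_acquire()` before each publish; when it returns False, that is the signal to divert the remaining records to a bulk path (for example, a batch REST import) rather than flooding the queue.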
