1-800-MELISSA

Tips & Tricks

Improving Throughput with Multiprocessing in SSIS

By Oscar Li,
Product Channel Manager


Did you know you can multiprocess your records by running multiple instances of our components? This guide will help you improve throughput with our On-Premise processing mode in SSIS.

Web service requests can already be multithreaded by using our advanced configuration to set the number of threads. Our on-premise components (when the processing mode is set to On-Premise), however, process records one at a time. Fortunately, in SSIS it's very easy to set up parallelism in order to scale up and increase processing speed.

  1. First, we need a way to assign a unique, sequential integer to every record. You have two options: use a unique ID that is automatically generated by SQL Server (you can import it along with your source data), or use a Script Component to generate a unique number on the fly. The Script Component is a native component of the SSIS toolbox.
    In this example, we will use a Script Component to add a column to each record containing a unique, incremental ID.
    Within the Script Component, I create a new output column called RowCountID, which stores a unique ID for each record, and then I script it this way:
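    A minimal sketch of that Script Component logic in C# might look like the following (it assumes RowCountID was added as a four-byte signed integer output column on the component's Inputs and Outputs page):

    ```csharp
    // Minimal sketch of the Script Component transformation logic.
    // Assumes RowCountID was added as an output column (four-byte signed
    // integer) on the component's Inputs and Outputs page.
    public class ScriptMain : UserComponent
    {
        private int rowCountID = 0;

        // Called once for every row flowing through the component.
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            rowCountID++;                // next sequential value
            Row.RowCountID = rowCountID; // stamp the row with its unique ID
        }
    }
    ```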

  2. Next, we need to set up a Conditional Split, which is also a native SSIS component. Use the Conditional Split transform to take the RowCountID and split the records into three groups using the modulo operator:
    If you need to initialize more than 3 instances, simply increase the modulo divisor and add a corresponding number of cases.
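    Written in the SSIS expression language, the three Conditional Split cases would look something like this (the output names here are illustrative):

    ```
    Case 1 (Output Name: Split 1):  RowCountID % 3 == 0
    Case 2 (Output Name: Split 2):  RowCountID % 3 == 1
    Case 3 (Output Name: Split 3):  RowCountID % 3 == 2
    ```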



  3. Now, we set up one Melissa Data component with all of its mappings and settings. Once we are done, make two more copies of the component by selecting it and pressing Ctrl+C, Ctrl+V twice, for three components in total. Copying the finished component copies all of its settings and mappings as well. Next, map one Conditional Split output to each component; the number of split outputs should match the number of components we have initialized.

    We should see something like this:



  4. For the last step, take the outputs from all three components and funnel them into the Union All component (also native to SSIS). This recombines the records into one stream so they are ready for their destination. One additional note: a solid-state drive (SSD) is beneficial for speed, because our objects (address/phone/email/name) read from data files installed in your Program Files\Melissa Data folder. Faster disk access helps ensure that our components never sit idle waiting to receive records to process.



Conclusion/Summary:

I now have three instances of Contact Verify set to run in parallel, potentially tripling the processing speed. To recap: a Script Component generates an increasing, unique ID for each record and stores it in an output column called RowCountID. The Conditional Split transform then takes the RowCountID and, using the modulo operator, splits the records into three groups, giving our Contact Verify components three streams to process in parallel. Once processing completes, the Union All component merges the three streams back into one, and from there the records flow into our destination.
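To see why the modulo split balances the load, here is a small standalone C# sketch (hypothetical, not part of the SSIS package) showing how RowCountID % 3 deals records round-robin into three evenly sized streams:

```csharp
using System;
using System.Linq;

class ModuloSplitDemo
{
    static void Main()
    {
        // RowCountID values 1..9, grouped the same way the Conditional
        // Split cases group them: by the remainder of division by 3.
        var ids = Enumerable.Range(1, 9);
        foreach (var group in ids.GroupBy(id => id % 3))
            Console.WriteLine($"Stream {group.Key}: {string.Join(", ", group)}");
        // Prints:
        // Stream 1: 1, 4, 7
        // Stream 2: 2, 5, 8
        // Stream 0: 3, 6, 9
    }
}
```

Each stream receives every third record, so no single instance becomes a bottleneck.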



Optimization Tips: