Parallel Execution in ODI

For a while now we have run into an issue on EPM with regard to FDMEE: the physical ODI topology for the FDMEE data source seems to randomly change the work table prefix values for Loading and Integration. The values change from the defaults to a setting that would enable parallel execution.


Work Table Prefixes

I am not sure whether another set of KMs enables parallel execution, but the environment I was provided did not include ODI KMs capable of taking advantage of these settings.

First, let me say that this process is fairly safe and probably should be the standard for KMs. Also, this blog was very helpful.

To enable parallel execution via the prefixes, we need to add a step to each LKM and IKM.
Add a step

Next we need to define the properties of the step.

  • Name: Parallel Setup
  • Technology: Java BeanShell
  • Command: <? String SESS_NO = odiRef.getSession("SESS_NO"); ?>
    Step Properties

Then the step needs to be moved so it is the first step in the KM. Highlight the step and click the arrows next to the "Add New Step" button.
Move step to top

That's pretty much it. The parallel code will only be used if the temporary tables (Loading and Integration) are set up to take advantage of it.
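As an illustration only (the exact values in your topology may differ), Physical Schema prefixes that pick up the session number saved by the Parallel Setup step could look like the following, assuming ODI's `<?= ... ?>` Java BeanShell expression syntax:

```
Work Tables prefix (Loading):      C$_<?=SESS_NO?>_
Work Tables prefix (Integration):  I$_<?=SESS_NO?>_
```

With prefixes like these, each session's work tables carry that session's number, matching the naming described below.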

If you are wondering how this works, I will attempt to explain below. On execution, the first step fires and saves the ODI session number, which is unique, to the SESS_NO variable. Next, the KMs create the staging/work tables used in the transformation process; when they are created, they use a prefix of C$ or I$ plus the session number (e.g., C$_01234_MyWorkTable). Since each session has its own set of tables, the interface can run in multiple simultaneous sessions without conflict.
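To make the collision-avoidance concrete, here is a small standalone Java sketch (not ODI API; the `workTable` helper and the session numbers are made up for illustration) showing why session-scoped names cannot clash:

```java
// Illustration of the naming scheme described above:
// prefix + session number + table name -> unique per session.
public class WorkTableNames {

    // Builds a work table name the way the post describes:
    // C$ or I$, then the session number, then the base table name.
    static String workTable(String prefix, String sessNo, String table) {
        return prefix + "_" + sessNo + "_" + table;
    }

    public static void main(String[] args) {
        // Two simultaneous sessions running the same interface:
        String sessionA = workTable("C$", "01234", "MyWorkTable");
        String sessionB = workTable("C$", "05678", "MyWorkTable");

        System.out.println(sessionA); // C$_01234_MyWorkTable
        System.out.println(sessionB); // C$_05678_MyWorkTable
        // Distinct session numbers yield distinct table names,
        // so neither session drops or truncates the other's data.
    }
}
```

Because ODI session numbers are unique, the same reasoning holds for any number of concurrent executions.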