Adventures in Groovy – Part 22: Looping Through Member Descendants

There are a lot of reasons one might loop through children in a Groovy Calculation.  On my journeys with Groovy, I have run into a few roadblocks where this has been helpful.  Some of these were related to limits in PBCS.  Looping through sets of members allowed us to get around some of the limitations.

  • Running Data Maps and Smart Pushes from a business rule has a limit on the amount of data that can be pushed (100MB).
  • Using the clear option on Data Maps and Smart Pushes has a limit on the length of the string that can be passed to the ASO clear (100,000 bytes).
  • The Data Grid Builders have a limit on the size of the data grid that can be created (500,000 cells before suppression).

All three of these situations create challenges, and the workaround is to break large requests into smaller chunks.  An example would be running the methods on one entity at a time, on no more than x products at a time, or even splitting the accounts into separate actions.

Possibilities

Before going into the guts of how to do this, be aware that member functions are available in PBCS.  Any of the following can be used to create a list of members to iterate through.

  • Ancestors (i)
  • Children (i)
  • Descendants (i)
  • Siblings (i)
  • Parents (i)
  • Level 0 Descendants

More complex logic could be developed to isolate members.  For example, a list of all level 0 descendants of Entity that have a UDA of IgnoreCurrencyConversion could be built.  It requires the additional metadata logic covered in Part 11, but it is very possible, as sketched below.
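A minimal sketch of that filter, assuming UDAs are read through Member.toMap() as covered in Part 11 (the cube, member, and UDA names are examples):

    // Build a list of level 0 entities, skipping any member flagged
    // with the IgnoreCurrencyConversion UDA ("Plan1" is a placeholder)
    Cube cube = operation.application.getCube("Plan1")
    List<Member> toConvert = operation.application.getDimension("Entity")
            .getEvaluatedMembers('Lvl0Descendants("Entity")', cube)
            .findAll { Member mbr -> !(mbr.toMap()["UDA"]?.toString()?.contains("IgnoreCurrencyConversion")) }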

Global Process

In this example, Company was used to make the data movement small enough that both the clear and the push were under the limits stated above.  The following loops through every Company (Entity dimension) and executes a Data Map for each currency (local and USD).
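A minimal sketch of that loop, assuming a Data Map named Company Push (the cube and member names are placeholders as well):

    // Loop through every level 0 company and push each one separately
    // so each clear and push stays under the limits stated above
    Cube cube = operation.application.getCube("Plan1")
    List<Member> companies = operation.application.getDimension("Entity")
            .getEvaluatedMembers('Lvl0Descendants("Total Company")', cube)

    companies.each { Member company ->
        ["Local", "USD"].each { String currency ->
            // one Data Map execution per company/currency combination
            operation.application.getDataMap("Company Push")
                    .execute(["Entity": company.name, "Currency": currency], true)
        }
    }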

On Form Save

When large volumes of data are accessed, it may not be realistic to loop through all the children one at a time.  Take a Product dimension with 30,000 members: the overhead of 30,000 individual grid builds would hurt performance, yet including all the members in a single grid might push it over the maximum cells allowed.  Iteration is still necessary, but the volume of products is not static from day to day, so explicitly defining the number of products for each loop is not realistic.  The approach below creates a max product count and loops through all the products in chunks of that size until all 30,000 products have been processed.
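A minimal sketch of the chunking pattern, using Groovy's collate() to split the product list into batches (the batch size, cube, Data Map, and member names are placeholders):

    // Split all level 0 products into fixed-size batches so each push
    // stays under the limits regardless of how many products exist
    int maxProducts = 4000   // tune to keep each grid under 500,000 cells
    Cube cube = operation.application.getCube("Plan1")

    List<String> products = operation.application.getDimension("Product")
            .getEvaluatedMembers('Lvl0Descendants("Total Product")', cube)*.name

    // collate() returns sublists of at most maxProducts members, so the
    // loop count adjusts automatically as products are added or removed
    products.collate(maxProducts).each { List<String> batch ->
        // quote each member name in case any names contain spaces
        String memberList = '"' + batch.join('","') + '"'
        operation.application.getDataMap("Product Push")
                .execute(["Product": memberList], true)
    }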

A Wrap

So, the concept is pretty simple, but having the ability to do things like this outside of typical Planning functions really opens up some possibilities.  This example ran post-save, but what about pre-save?  How about changing the color of cells with certain UDAs?  What about taking other properties managed in DRM that can be pushed to Planning and putting them to use?  How about spotlighting specific products for specific regions that are key success factors in profitability?

Do you have an idea?  Take one, leave one!  Share it with us.


5 Comments

  1. This is great Kyle! I was looking for alternatives to Data Maps today when the limit is reached. This solves that problem.

  2. You can probably reduce the looping by looping through only the members that have data; this can be achieved with aggregation + DataGridBuilder.

    • In this example, we are moving data to an ASO cube with exactly the same dimensionality. I would bet that the time it would take to consolidate would be longer than pushing nothing, since the push is very quick when there is no data. Checking to see if data exists is a great idea in many situations, though.

  3. Kyle,

    can you think of any way to run any of the processes in parallel? Parallel processing (particularly in the context of ASO calculations) can be incredibly beneficial from a performance perspective.

    In many circumstances, splitting a task into smaller subsets within ASO procedural calculations can greatly reduce the length of the calculation compared to running it for the entire group – and the gains compound if those subsets can be isolated from each other and run in parallel.

    Most Groovy examples seem to rely upon GPars (http://www.gpars.org/), a parallelism framework for Groovy and Java, which doesn’t seem to be available in PBCS.

    Any thoughts?

    Cheers
    Pete

    • I don’t believe this is available, but maybe down the road? I have actually been playing with a business rule to multi-thread it. What I am doing now is using a ruleset and running the n calculations with the parallel option in the ruleset. I created a rule that takes a predetermined volume of entities and executes the logic on those entities, and duplicated it 2 more times (changing the range to be executed on). For example, let’s assume I have 3 rules. The first runs on the first third of the entities, the second rule runs on the second third, and the third rule runs on the final third. Each rule is dynamic to the number of level 0 members in Entity: if there are 8 entities, 1-3 would be rule 1, 4-6 would be rule 2, and 7-8 would be rule 3. I am actually working on a post to walk through this, but this is a teaser! You could theoretically create a template and pass in the number of ways you want to split the entities (3 in this example) and which range to execute. Currently, I just duplicated the rule and changed the variables I created for these two options. A rough sketch of the slicing idea follows.

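      A rough sketch of that slicing, where totalRules and ruleNumber stand in for the two duplicated variables (cube and member names are placeholders):

        // Each duplicated rule sets its slice number; the entity list is
        // split dynamically so the ranges adjust as level 0 members change
        int totalRules = 3    // how many rules the ruleset runs in parallel
        int ruleNumber = 1    // which slice this copy of the rule handles

        List<String> entities = operation.application.getDimension("Entity")
                .getEvaluatedMembers('Lvl0Descendants("Entity")', operation.application.getCube("Plan1"))*.name

        // with 8 entities and 3 rules: rule 1 gets 1-3, rule 2 gets 4-6, rule 3 gets 7-8
        int chunkSize = Math.max(1, Math.ceil(entities.size() / (double) totalRules) as int)
        List<String> slice = entities.collate(chunkSize)[ruleNumber - 1] ?: []
        // ...run the clear/push logic on just this slice...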