Groovy, Game-Changing Technology at Breakthru Beverage Group – Webinar Invitation

Join Us On March 22, 2018

On March 22, 2018, I am hosting a webinar featuring the work delivered at Breakthru Beverage Group in Chicago, Illinois. Breakthru had the traditional challenges, but it also had obstacles others don't. With requirements to enter budget at any level, complex allocation and seeding logic, and the need for consolidated reporting in real time, we had to get creative. Welcome, Groovy Calculations! Groovy Calculations were released in June of 2017, just in time to be a key resource in solving these problems. The webinar will highlight solutions like changing a product price at a consolidated level and seeing it allocated down to delivery channel, material group, and company. It will show how we consolidated and pushed the results to the P&L applications in seconds.

It is easy to participate – you can RSVP TODAY.

I will discuss the architecture, the challenges, and how we used Groovy to do things never before possible in Hyperion Planning.

Although we will be discussing the technology used, this is not a technical discussion on how to write Groovy calculations. If you are an administrator, user, or owner of PBCS, we will highlight, at a more functional level, challenges you are likely facing and how to overcome them with Groovy. If you are looking to purchase or move to the cloud, this presentation will educate you on the possibilities now available with the new Groovy Calculation functionality.

Agenda

  • Introduction: Setting the expectations and introducing the speakers
  • Application Overview: Application, purpose, top down planning, and seeding
  • Performance Challenges: Product updates, allocations, long wait times for consolidated reporting
  • Real Time Reporting: How Groovy allowed us to overcome performance issues and enable real time consolidated reporting
  • The Groovy 411: Live demo showing how Groovy Calculations solved performance issues
  • More Than Performance: Live demo showing other enhancements Groovy provides, like user input validation
  • Finishing Up: Q/A, review, and opportunities for next steps to set up an optimization assessment

The Official Invitation

Top Down and Bottom Up Planning at Breakthru Beverage Group

Planners are always looking for real-time reporting and faster feedback. They want to make the forecasting and planning process faster by using historical trends, entering data at any level, entering growth factors, and driving the results down to the lowest level of the business. They want instant feedback on consolidated results.

Join this webcast and hear from the VP of Financial Planning & Analysis at Breakthru Beverage Group on how they are using Oracle Planning and Budgeting Cloud Service (PBCS) integrated with game changing technology, Groovy, to improve speed and performance across planning processes.

Leave this session with an understanding of how Breakthru Beverage:

  1. Attained the strategic benefits of building a driver-based budgeting and forecasting application, with the ability to seed product-level data and apply growth rates at consolidated levels to effectively build a bottom-up plan.
  2. Leveraged workforce planning, including the ability to allocate people over multiple cost centers and companies.
  3. Developed a technical architecture and strategy to allow this to happen and integrate with the higher-level P&L in real time.

RSVP today and learn how you can take advantage of Groovy.




Exporting Data in PBCS With Business Rules

Introduction

If your environment is a cloud product, whether it be PBCS or ePBCS, one thing that is critical to understand is that the backups produced in the Migration area may not be what you think. Learning this after the fact may have negative consequences on your ability to restore data. In the migration, the Essbase Data section is a copy of the pag, ind, and otl files. When this is used to restore data, it restores the entire database, including both data and metadata. This may be OK for many situations, but it won't help you if:

  • only specific data needs to be restored
  • specific data has changed and needs to be excluded from the restore
  • corruption exists in the database and all of the data needs to be reloaded
  • the pag files that hold the data are not readable
  • the size of the backup is a concern, as it includes all data, and upper-level data is normally exponentially larger than level 0 data alone

Text Data Export

Business Rules can be written to export data to the Inbox/Outbox as a delimited file with a few formatting options. The entire database can be included, or, with FIX statements, specific data can be isolated. So, forecast could be exported to one file, plan to another, and actuals to a third. Specific accounts, entities, and/or products can be isolated in cases where specific data was inadvertently changed or deleted. The result is a text file that can be opened in any text editor, Microsoft Excel, a database, or any other application that you use to view or manipulate text files.

Example Business Rule

/* Set the export options */
SET DATAEXPORTOPTIONS
{
DataExportLevel LEVEL0;
DataExportDynamicCalc OFF;
DataExportNonExistingBlocks OFF;
DataExportDecimal 4;
DataExportPrecision 16;
DataExportColFormat ON;
DataExportColHeader Period;
DataExportDimHeader ON;
DataExportRelationalFile ON;
DataExportOverwriteFile ON;
DataExportDryRun OFF;
};

/* Limit the export to level 0 members of every dimension;
   replace the custom dimension names with those in your application */
FIX(@Relative("Account", 0),
    @Relative("Years", 0),
    @Relative("Scenario", 0),
    @Relative("Version", 0),
    @Relative("Entity", 0),
    @Relative("Period", 0),
    @Relative("custom_dim_name_1", 0),
    @Relative("custom_dim_name_2", 0),
    @Relative("custom_dim_name_3", 0))

  /* Export the fixed data to a comma-delimited file in the inbox */
  DATAEXPORT "File" "," "/u03/lcm/filename_xyz.txt" "";

ENDFIX

Some Hints

There are a few things you may encounter that can be confusing, so the following hints might help.

  1. To see the data export, it must be written to /u03/lcm/, which is the equivalent of your inbox. Any file name can be used.
  2. Setting DataExportLevel to LEVEL0 will export the level 0 blocks, not the level 0 members. If there are any stored members in any of your dense dimensions, they will be exported unless the dimension is also in the FIX to include ONLY level 0 members.
  3. The FIX statement works the same as a FIX statement in any business rule, so the data to be exported can be easily defined.
  4. In my experience, exporting dynamically calculated members drastically increases the time of the export.
  5. The export options are all pretty logical. Some work in conjunction with each other, and others are ignored depending on dependent setting values. These are documented for version 11.1.2.4 here.
  6. This process can be automated with EPM Automate and include the download and time stamp of the backup for later use, as sketched below.
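
For those interested in item 6, here is a minimal sketch of what the automation might look like in PowerShell. The rule name, file names, install path, and credential variables are all assumptions; adjust them for your environment.

# Assumed variables: $User, $Password, $Url, and $Domain are defined elsewhere in the script
$EpmAutomate = "C:\Oracle\EPM Automate\bin\epmautomate.bat"   # hypothetical install path
$Stamp = Get-Date -Format "yyyyMMdd"

& $EpmAutomate login $User $Password $Url $Domain
& $EpmAutomate runbusinessrule "ExportLevel0Data"             # hypothetical rule containing the DATAEXPORT
& $EpmAutomate downloadfile "filename_xyz.txt"                # pulls the export from the inbox
Rename-Item "filename_xyz.txt" "filename_xyz_$Stamp.txt"      # time stamp the backup for later use
& $EpmAutomate logout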

Conclusion

There are benefits to both types of backups. My preference is to either run both nightly or run just the Business Rule. By having both, the administrator has the option of restoring the data as needed, in the way that is most effective. Having both provides the ultimate flexibility. If space is an issue, exclude the data option in the Migration and just run the business rule.


From Oracle’s Documentation

DataExportLevel ALL | LEVEL0 | INPUT

  • ALL—(Default) All data, including consolidation and calculation results.
  • LEVEL0—Data from level 0 data blocks only (blocks containing only level 0 sparse member combinations).
  • INPUT—Input blocks only (blocks containing data from a previous data load or grid client data-update operation). This option excludes dynamically calculated data. See also the DataExportDynamicCalc option.

In specifying the value for the DataExportLevel option, use these guidelines:

  • The values are case-insensitive. For example, you can specify LEVEL0 or level0.
  • Enclosing the value in quotation marks is optional. For example, you can specify LEVEL0 or “LEVEL0”.
  • If the value is not specified, Essbase uses the default value of ALL.
  • If the value is incorrectly expressed (for example, LEVEL 0 or LEVEL2), Essbase uses the default value of ALL.

Description

Specifies the amount of data to export.

DataExportDynamicCalc ON | OFF

  • ON—(Default) Dynamically calculated values are included in the export.
  • OFF—No dynamically calculated values are included in the export.

Description

Specifies whether a text data export excludes dynamically calculated data.

Notes:

  • Text data exports only. If DataExportDynamicCalc ON is encountered with a binary export (DATAEXPORT BINFILE …) it is ignored. No dynamically calculated data is exported.
  • The DataExportDynamicCalc option does not apply to attribute values.
  • If DataExportLevel INPUT is also specified and the FIX statement range includes sparse Dynamic Calc members, the FIX statement is ignored.

DataExportNonExistingBlocks ON | OFF

  • ON—Data from all possible data blocks, including all combinations in sparse dimensions, are exported.
  • OFF—(Default) Only data from existing data blocks is exported.

Description

Specifies whether to export data from all possible data blocks. For large outlines with a large number of members in sparse dimensions, the number of potential data blocks can be very high. Exporting Dynamic Calc members from all possible blocks can significantly impact performance.

DataExportPrecision n

n (Optional; default 16)—A value that specifies the number of positions in exported numeric data. If n < 0, 16-position precision is used.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on precision (accuracy). Depending on the size of a data value and number of decimal positions, some numeric fields may be written in exponential format; for example, 678123e+008. You may consider using DataExportPrecision for export files intended as backup or when data ranges from very large to very small values. The output files typically are smaller and data values more accurate. For output data to be read by people or some external programs, you may consider specifying the DataExportDecimal option instead.

Notes:

  • By default, Essbase supports 16 positions for numeric data, including decimal positions.
  • The DataExportDecimal option has precedence over the DataExportPrecision option.

Example

SET DATAEXPORTOPTIONS
    { 
    DataExportPrecision 6;
    DataExportLevel ALL;
    DataExportColHeader "Measures";
    DataExportDynamicCalc ON;
    };
    DATAEXPORT "File" "," "output1.out";

Initial Data Load Values

"Sales" "COGS" "Margin" "Marketing" "Payroll" "Misc" "Total Expenses" "Profit" "Opening Inventory" "Additions" "Ending Inventory" "Margin %" "Profit %" 
"100-10" "New York"
"Jan" "Actual" 678123456.0 271123456.0 407123456.0 941234567890123456.0 51123456.0 0 145123456.0 262123456.0 2101123456.0 644123456.0 2067123456.0 60123456.029 38123456.6430
"Feb" "Actual" 645123 258123 3871234 9012345 5112345 112345678 14212345 24512345 2067123456 61912345 20411234 601234 37123456.98 
"Mar" "Actual" 675 270 405 94 51 1 146 259 2041 742 2108 60 38.37037037037037 
"Qtr1" "Actual" 1998 799 1199 278 153 2 433 766 2101 2005 2108 60.01001001001001 38.33833833833834

Exported Data Format

"Sales","COGS","Margin","Marketing","Payroll","Misc","Total Expenses","Profit","Opening Inventory","Additions","Ending Inventory","Margin %","Profit %","Profit per Ounce","100-10","New York"
"Jan","Actual",6.78123e+008,2.71123e+008,4.07e+008,9.41235e+017,5.11235e+007,0,9.41235e+017,-9.41235e+017,2.10112e+009,6.44123e+008,2.06712e+009,60.0186,-1.388e+011,-7.84362e+016
"Feb","Actual",645123,258123,387000,9.01235e+006,5.11235e+006,1.12346e+008,1.2647e+008,-1.26083e+008,2.06712e+009,6.19123e+007,2.04112e+007,59.9886,-19544.1,-1.05069e+007
"Mar","Actual",675,270,405,94,51,1,146,259,2041,742,2108,60,38.3704,21.5833

DataExportDecimal n

Where n is a value between 0 and 16.

If no value is provided, the number of decimal positions of the data to be exported is used, up to 16 positions, or a value determined by the DataExportPrecision option if that is specified.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on legibility; output data is in straight text format. Regardless of the number of decimal positions in the data, the specified number is output. It is possible the data can lose accuracy, particularly if the data ranges from very large values to very small values, above and below the decimal point.

Notes:

  • By default, Essbase supports 16 positions for numeric data, including decimal positions.
  • If both the DataExportDecimal option and the DataExportPrecision option are specified, the DataExportPrecision option is ignored.

Example

SET DATAEXPORTOPTIONS
    {DataExportDecimal 4;
    DataExportLevel "ALL";
    DataExportColHeader "Measures";
    DataExportDynamicCalc ON;
    };
    DATAEXPORT "File" "," "output1.out";

Initial Data Load Values

"Sales" "COGS" "Margin" "Marketing" "Payroll" "Misc" "Total Expenses" "Profit" "Opening Inventory" "Additions" "Ending Inventory" "Margin %" "Profit %" 
"100-10" "New York"
"Jan" "Actual" 678123456.0 271123456.0 407123456.0 941234567890123456.0 51123456.0 0 145123456.0 262123456.0 2101123456.0 644123456.0 2067123456.0 60123456.029 38123456.6430
"Feb" "Actual" 645123 258123 3871234 9012345 5112345 112345678 14212345 24512345 2067123456 61912345 20411234 601234 37123456.98 
"Mar" "Actual" 675 270 405 94 51 1 146 259 2041 742 2108 60 38.37037037037037 
"Qtr1" "Actual" 1998 799 1199 278 153 2 433 766 2101 2005 2108 60.01001001001001 38.33833833833834

Exported Data Format

"Sales","COGS","Margin","Marketing","Payroll","Misc","Total Expenses","Profit","Opening Inventory","Additions","Ending Inventory","Margin %","Profit %","Profit per Ounce"
"100-10","New York"
"Jan","Actual",678123456.0000,271123456.0000,407000000.0000,941234567890123520.0000,51123456.0000,0.0000,941234567941246980.0000,-941234567534246910.0000,2101123456.0000,644123456.0000,2067123456.0000,60.0186,-138799883591.4395,-78436213961187248.0000
"Feb","Actual",645123.0000,258123.0000,387000.0000,9012345.0000,5112345.0000,112345678.0000,126470368.0000,-126083368.0000,2067123456.0000,61912345.0000,20411234.0000,59.9886,-19544.0820,-10506947.3333
"Mar","Actual",675.0000,270.0000,405.0000,94.0000,51.0000,1.0000,146.0000,259.0000,2041.0000,742.0000,2108.0000,60.0000,38.3704,21.5833

Output Format Options

DataExportColFormat ON | OFF

  • ON—The data is output in columnar format.
  • OFF—Default. The data is output in non-columnar format.

Description

Specifies if data is output in columnar format. Columnar format displays a member name from every dimension; names can be repeated from row to row, enabling use by applications other than Essbase tools. In non-columnar format, sparse members identifying a data block are included only once for the block. Non-columnar export files are smaller, enabling faster loading to an Essbase database.

Notes

Do not use the DataExportColFormat option in combination with the DataExportRelationalFile option, which already assumes columnar format for files destined as input files to relational databases.

Example

SET DATAEXPORTOPTIONS
 {
 DATAEXPORTCOLFORMAT ON;
 };
 FIX("100-10", Sales, COGS, Jan, Feb, Mar, Actual, Budget)
 DATAEXPORT "File" "," "d:\temp\test2.txt" ;
ENDFIX;

DataExportColHeader dimensionName

Description

Specifies the name of the dense dimension that is the column header (the focus) around which other data is referenced in the export file. Use the DataExportColHeader option only when you export data to a text file. For example, if from Sample Basic the Year dimension is specified, the output data starts with data associated with the first member of the Year dimension: Year. After all data for Year is output, it continues with the second member: Qtr1, and so on.

Notes

MaxL, ESSCMD, and Essbase exports do not provide a similar capability. With these methods, Essbase determines the focal point of the output data.

Exporting through Report Writer enables you to specify the header in the report script.

Example

SET DATAEXPORTOPTIONS {DATAEXPORTCOLHEADER Scenario;};

Specifies Scenario as the page header in the export file. The Scenario dimension contains three members: Scenario, Actual, and Budget. All Scenario data is shown first, followed by all Actual data, then all Budget data.

DataExportDimHeader ON | OFF

  • ON—The header record is included.
  • OFF—Default. The header record is not included.

Description

Use the DataExportDimHeader option to insert the optional header record at the beginning of the export data file. The header record contains all dimension names in the order as they are used in the file. Specifying this command always writes the data in “column format”.

Example

SET DATAEXPORTOPTIONS 
 {
 DATAEXPORTLEVEL "ALL"; 
 DATAEXPORTDIMHEADER ON; 
 };
FIX("100-10", "New York", "Actual")
 DATAEXPORT "File" "," "E:\temp\2222.txt" ;
ENDFIX;

Specifying the DataExportDimHeader ON option while exporting Sample Basic writes the data in column format, with common members repeated in each row. The data begins with a dimension header, as shown in the first two rows of the example file below:

"Product","Market","Year","Scenario","Measures"
"Sales","COGS","Marketing","Payroll","Misc","Opening Inventory","Additions","Ending Inventory"
"100-10","New York","Jan","Actual",678,271,94,51,0,2101,644,2067
"100-10","New York","Feb","Actual",645,258,90,51,1,2067,619,2041
"100-10","New York","Mar","Actual",675,270,94,51,1,2041,742,2108
"100-10","New York","Apr","Actual",712,284,99,53,0,2108,854,2250
"100-10","New York","May","Actual",756,302,105,53,1,2250,982,2476
"100-10","New York","Jun","Actual",890,356,124,53,0,2476,1068,2654
"100-10","New York","Jul","Actual",912,364,127,51,0,2654,875,2617
"100-10","New York","Aug","Actual",910,364,127,51,0,2617,873,2580
"100-10","New York","Sep","Actual",790,316,110,51,1,2580,758,2548
"100-10","New York","Oct","Actual",650,260,91,51,1,2548,682,2580
"100-10","New York","Nov","Actual",623,249,87,51,0,2580,685,2642
"100-10","New York","Dec","Actual",699,279,97,51,1,2642,671,2614

DataExportRelationalFile ON | OFF

  • ON—The output text export file is formatted for import to a relational database.
    • Data is in column format; sparse member names are repeated. (The DataExportColFormat option is ignored.)
    • The first record in the export file is data; no column heading or dimension header is included, even if specified. (The DataExportColHeader and DataExportDimHeader options are ignored.)
    • Missing and invalid data is skipped, resulting in consecutive delimiters (commas) in the output. The optional “missing_char” parameter for DATAEXPORT is ignored.
  • OFF—Default. The data is not explicitly formatted for use as input to a relational database.

Description

Using the DataExportRelationalFile option with DATAEXPORT enables you to format the text export file to be used directly as an input file for a relational database.

Example

SET DATAEXPORTOPTIONS {
 DataExportLevel "ALL";
 DataExportRelationalFile ON;
};

FIX (Jan)
 DATAEXPORT "File" "," c:\monthly\jan.txt
ENDFIX;

Processing Options

DataExportOverwriteFile ON | OFF

  • ON—The existing file with the same name and location is replaced.
  • OFF—Default. If a file with the same name and location already exists, no file is output.

Description

Manages whether an existing file with the same name and location is replaced.

DataExportDryRun ON | OFF

  • ON—DATAEXPORT and associated commands are run, without exporting data.
  • OFF—Default. Data is exported.

Description

Enables running the calculation script data export commands to see information about the coded export, without exporting the data. When the DataExportDryRun option value is ON, the following information is written to the output file specified in the DATAEXPORT command:

  • Summary of data export settings
  • Info, Warning, and Error messages
  • Exact number of blocks to be exported
  • Estimated time, excluding I/O time.

Notes

  • The DataExportDryRun option does not work with exports to relational databases.
  • If you modify the script for reuse for the actual export, besides removing the DataExportDryRun option from the script you may want to change the name of the export file.

Example

SET DATAEXPORTOPTIONS 
 {
 DataExportLevel "ALL";
 DataExportColHeader "Measures";
 DataExportColFormat ON;
 DataExportDimHeader ON;
 DataExportDynamicCalc OFF;
 DataExportDecimal 0;
 DataExportDryRun ON;
 DataExportOverwriteFile ON;
 };

FIX("Qtr1")
 DATAEXPORT "File" "," "E:\temp\log.txt" ;
ENDFIX;



Adventures in Groovy – Part 9: Ohio Valley OAUG Presentation

I was lucky enough to be invited to talk about the new Groovy Calculations in PBCS and ePBCS at the Ohio Valley OAUG meeting today. If you have read the Groovy series, you know how strongly I feel about the advancements in Hyperion Planning with the addition of Groovy Calculations. I want to share the presentation with a wider audience. It is a functional overview for those who are new to the concepts, introduces readers to developing their first Groovy Calculation, and provides some examples.






Updating EPM Automate Just Got Easier

Introduction

One of the challenges with EPM Automate has been eliminated this month. Although it was a minor issue, the need to update EPM Automate regularly was something that had to be considered monthly. Administrators of PBCS do not always have access to the on-premise footprint, like a Windows VM, that runs the automation. Even more frequently, access to the production VM is limited to IT staff, so updating that environment is more strictly controlled and has to be scheduled. That schedule doesn't always sync up with the changes in PBCS.

Update Command

As of the 02.18 release, a new command is available. The upgrade command will automatically download, and silently install, the newest version of the EPM Automate utility. Once logged in, execute the following command.

epmautomate upgrade

If you are a frequent visitor, you know I am a fan of PowerShell.  All the automation I do with EPM Automate in the Windows environment utilizes this free scripting tool.  This command has been added to all my new projects so there is no manual effort in keeping the utility current.  This also eliminates any issues that pop up due to incompatibility issues with PBCS.

In my reusable scripts, this new function has been added.

function EPMA_Upgrade{
  $CmdLine = "upgrade"
  # $EPMAutomate_Path and $ShowDosWindow are defined elsewhere in my shared scripts
  $ReturnCode = Start-Process "$EPMAutomate_Path\epmautomate.bat" $CmdLine -Wait -PassThru -WindowStyle $ShowDosWindow
  LogResult "EPM Automate has been updated" $ReturnCode.ExitCode
  # Return the exit code so callers can test for failure
  return $ReturnCode.ExitCode
}
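
LogResult is one of my shared helper functions, so it is not defined above. If you don't have an equivalent, a minimal sketch might look like this (the log path and message format are illustrative):

function LogResult($Message, $ExitCode){
  # Write a timestamped status line to a log file; an exit code of 0 is treated as success
  $Status = if($ExitCode -eq 0){"SUCCESS"}else{"FAILURE"}
  Add-Content "$PSScriptRoot\automation.log" "$(Get-Date -Format s) [$Status] $Message (exit code $ExitCode)"
}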

During a nightly process, the function is referenced. If the request fails, the administrators are emailed.

$ReturnResult = EPMA_Upgrade
if( $ReturnResult -eq 1){Send_Email_Error;Exit}

Summary

This is a welcome addition. Now administrators and application owners don't have to worry about using new features or keeping EPM Automate in sync with the active version of PBCS. As the great Forrest Gump would say, “One Less Thing.”




Supercharge PBCS with PowerShell

Last year I presented an in-depth overview of PowerShell and how it can be utilized in the Hyperion environment. I have been asked many times to share it. It is a technical presentation, meant to provide a strong introductory-level foundation for anybody who wants to start using PowerShell to automate repetitive tasks. I have built a large library of shared functions that can be used to automate PBCS and ePBCS, and I plan to share pieces of it in future posts.

For now, anybody who is interested in learning PowerShell, or who has used it and doesn't know why some things work and others don't, may find this a valuable resource.







Ohio Valley OAUG – Getting Groovy in Louisville

I have been selected to speak at the OVOAUG on February 16, 2018. I have been there before, and it is a very nice group of people to engage with. If you are in the area, or would like to hear more about how Groovy in PBCS can improve performance, user interaction, and data input, while reducing user frustration, please go to http://ohio.communities.oaug.org/ and register. I would love to see as many of you there as possible.

Here is the agenda. It is going to be a functional overview, but we will touch on how to start writing Groovy. If you show up, I will be more than happy to talk before and/or after the session.




PBCS Data Map / Smart Push Has Data Volume Limits

Introduction

When moving data in PBCS with Data Maps or Smart Pushes, there are limits on how much data can be moved. The amount of data moved can be seen in the logs, and looks something like this.

Failure

Exporting data…
Exported data file(s) size is: 207.1 MB.
Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

Success

Exported data file(s) size is: 464.7 MB.
EXPORT elapsed time: 39584
IMPORTING – AppName: AreakFin
TRANSFORM elapsed time: 63634
IMPORTING elapsed time: 21166
TOTAL elapsed time: 124553

Prior to the February 2018 release, the following did not always hold true. If you are/were seeing inconsistencies, see Bug Report: Push Data failed. It also includes information about how the data cap works, as it differs between Data Maps and Smart Pushes, and is worth reading.

Data Movement Limits Identified

I got the following information from Oracle, and it is useful if you are using the data movement functionality. When these are developed, it is a good idea to evaluate the size and plan for growth. If the production data movements are nearing the thresholds, be proactive and try to reduce the POV that is used to move the data. If it can't be reduced, one option is to split it into multiple pushes, which can be done with Smart Pushes on Data Form save, or with Groovy. Groovy also allows you to further condense the POV by dynamically changing it based on the cells edited, which is the most productive and efficient way to handle these (see the sketch after the list below).

So, here is what was documented.  The data limits imposed on the movement methods are below.

  • There is not a cap when running a Data Map
  • When executing the following, there is a cap of 100MB
    • Smart Push on a Data Form
    • Smart Push via a Groovy Calculation
    • Data Map via a Groovy Calculation
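
To illustrate the Groovy option, here is a minimal sketch of executing a Smart Push from a Groovy Calculation with an overridden POV, so only the edited slice is moved. The Smart Push name and the Scenario override are assumptions; replace them with the ones in your application.

// Resolve the Scenario member from the form's POV
String scenario = '"' + operation.grid.pov.find{ it.dimName == 'Scenario' }.essbaseMbrName + '"'

// Execute the Smart Push attached to the form, overriding its POV
// "Push_To_Reporting" is a hypothetical Smart Push name
if(operation.grid.hasSmartPush("Push_To_Reporting"))
    operation.grid.getSmartPush("Push_To_Reporting").execute(["Scenario":scenario])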

Summary

If you are not seeing this behavior, I would recommend opening a ticket with Oracle to resolve it. I will be writing a post explaining how to execute and override POVs in Smart Pushes and Data Maps with a Groovy Calculation in the near future, so look for an article in my Adventures in Groovy series.





Bug Report: Push Data failed. Error: Exported data size violates permissible amount: 100 MB

Introduction

Data Map Error:

Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

If you are confused, join the club. The results are inconsistent, as some data pushes over the 100MB limit are successful. So, why the following error?

Exporting data…
Exported data file(s) size is: 207.1 MB.
Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

Clarification

A point of clarification for those of you who are new to data maps and smart pushes: if you think they are the same thing, here is the clarification from Oracle, in my words.

  • A Data Map is any data map executed from the Data Map area, whether it is through the UI, EPM Automate, or the REST API.
  • A Smart Push is essentially any Data Map executed from a Data Form.

Although they seem like the same function, they execute in different logical areas. My understanding is that a Data Map should never hit a cap on memory. A Smart Push does have a cap. Not only that, but the way it was explained to me, there is a hard cap on how much memory Smart Pushes can consume, and it is a global limit, not a limit per Smart Push. So, the reason you are experiencing inconsistent results with Smart Pushes is quite simple: the more Smart Pushes that are executed in a time window, the more memory is used. You may never have a problem in Test, or at night, but during UAT or in Prod, successful execution may be intermittent. When the pushes are run periodically, the limit may never be reached; when they are run multiple times by multiple people in a short duration, the limit will be consumed.

This bug only applies to Data Maps.

The Problem

The same Data Map executed results in two different outcomes.

Failure

Exporting data…
Exported data file(s) size is: 207.1 MB.
Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

Success

Exported data file(s) size is: 464.7 MB.
EXPORT elapsed time: 39584
IMPORTING – AppName: AreakFin
TRANSFORM elapsed time: 63634
IMPORTING elapsed time: 21166
TOTAL elapsed time: 124553

So, if there is a cap at 100MB, what gives? If you have seen the error above and wondered why the same Data Map sometimes runs and sometimes fails, it is related to Bug 27161430.

The Fix

Although support was difficult to navigate, I was lucky enough to be at an Oracle session in Virginia and talk to a developer. He immediately requested the ticket number and said, flat out, that this is a problem. I don't want to name names, so a huge thank you to an unidentified developer at Oracle for giving me a few minutes and helping, because I don't believe it would have been escalated to the development team otherwise.

The ticket was updated yesterday, and the fix is slated to be released in February. Although this is an internal bug, here are the details.

Bug 27161430 – PBCS: EXPORTED DATA SIZE OF DATA MAP THAT IS BEING EXECUTED FROM GROOVY IS MORE




Bug Report: Groovy SubstitutionVariable Class Not Functioning

If you have jumped into Groovy Calculations, one of the first things you likely tried to do is grab the value of a substitution variable. Hopefully, you haven't spent too much time on it before reading this. I wasted a ton of time trying to get this to work before I opened a ticket with Oracle. The SubstitutionVariable class is NOT available yet and was inadvertently included in the public docs at https://docs.oracle.com/cloud/latest/epm-common/GROOV/. The development team told me they are going to remove it from the API docs.

Without it, the best way I have found to get this value is by adding it to a grid and pulling the dimension value from that column/row. For example, if your periods are in the columns and you need the value of a substitution variable that holds the current month, add the substitution variable to the first column, and hide the column so the users are not confused by its purpose/location. If you use getCellWithMembers and don't pass it any parameters, it will pull the top-left cell in the grid, even if it is hidden. Since this is a Period member, use the getPeriodName method. If it is a custom dimension, the getMemberName method will provide what you need.

Here is an example. The grid's first column is hidden, and its period is set to the substitution variable that represents the last month of actuals for the year.

String sCurMonth = '"' + operation.grid.getCellWithMembers().getPeriodName() + '"'
 - OR - 
String sCurMonth = '"' + operation.grid.getCellWithMembers().getMemberName("Period") + '"'

The sCurMonth variable can then be used wherever the substitution variable value is needed in the Groovy calculation, as in the hypothetical sketch below.
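
As a hypothetical example of where this might go, a Groovy rule can build and return a calc script string that embeds the resolved month; the FIX members below are illustrative only.

// Build a calc script using sCurMonth from the example above
String calcScript = """
FIX("Actual", $sCurMonth)
  AGG("Entity");
ENDFIX
"""
// A string returned from a Groovy rule is executed as an Essbase calc script
return calcScript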





Bug Report: EPM Automate

A bug with EPM Automate has been identified. It is not replicated on every version or client. Please pay attention to any EPM Automate updates you install. In the past, I was able to install the latest version without any issues. Currently, the install prompts users to uninstall the older version. Previously this worked as expected, but now, when selected, it has no effect and the new EPM Automate is NOT installed, leaving you with the existing version. I noticed that the install goes VERY fast, like nothing was updated. If you experience a similar, sub-second installation, you may have the same issue.

Oracle has assigned a bug number to this issue, but no release date has been assigned to a fix.  The following is not a public bug.

Bug 25429167 : EPMAUTOMATE NO LONGER PROPERLY REMOVES OLD VERSION.

When you update EPM Automate, validate that the install worked by running EPM Automate and checking the version number.

The version should generally reflect the date of download, if you download it from Oracle's website. For example, a version of 17.12 signifies a December 2017 release.

If the version doesn't change and shows a prior install date, go to Control Panel, select Programs, and choose Uninstall a Program. Find EPM Automate and uninstall it. Once this is completed, install the newest version from Oracle's website and you should be good to go.

Happy Holidays!