Getting The Most From Your Oracle Support

Nobody will argue that Oracle Support is a world-class experience.  Like it or not, the service is there to support thousands of customers, and individuals can get lost in the process.  If you navigate the process correctly, you can get results.

What To Do First

Your first step should be to troubleshoot the issue and give support everything you can.  The more you provide up front, the fewer "did you plug in the computer" type questions you will get to delay your help.  Be clear and concise.  The first few levels of support are likely less experienced than you and will only be able to provide novice-level help most of the time.

Next, make sure the content of your support ticket is complete.  Don't leave any open questions.  What I have found to be extremely helpful is to video the issue.  Make sure you show the problem and walk through the calculation, script, or whatever is the source of the issue.  This doesn't have to be a professional video.  There are free screen recording applications, and there are also some pretty inexpensive ones.  If you use a Mac, Screenflick is 35 bucks and well worth it; it gives you the basics.  If you use SnagIt, it also works just fine.

If you are a Windows user, there is a nice free option, but you probably haven't noticed it, and for good reason.  If you look through the Start menu, you won't find any reference to a screen recorder.  You're not missing it… it simply isn't there.  That's because rather than making the screen recorder a standalone app, Microsoft included it as a feature of the Xbox Game Bar in Windows 10.

Your Second Step

After you open a ticket and provide your video of the issue, proving there is an issue without a doubt, follow these steps:

  • Make sure you get a duty manager assigned and push for escalation
  • Call 1-800-223-1711
  • Press 1 and enter your SR number
  • Press 2 to speak with a manager
  • Do not press 1 to speak with an Engineer; this will likely result in going straight to voicemail
  • Tell the Support Hub the critical impact you are facing
  • Request a Duty Manager callback ASAP
  • Repeat the request at level 2, then level 3, up to level n, until you get what you want
  • Call hourly until you get what you want – the squeaky wheel gets the grease

During this process, here are some things to do to make sure you get the most out of the experience.

  • Speak with the support hub manager
  • Describe the impact, include key project milestones at risk and communicate what you need for progress
  • If your issue is CRITICAL you should ALWAYS make a request for a duty manager call back.
  • If you ask for a duty manager call back, wait 30-60 minutes until you receive the call and reiterate the problem
  • Be strategic with your request
  • Don’t just say you need to escalate – describe the impact and communicate what you need for progress
  • Keep in mind, your SR will NOT appear in an “escalated” state unless you negotiate this with a duty manager

That’s A Wrap

Should you have to do all this?  No.  The experience should be better, but yelling about it and banging your fists on the table won’t change anything.  You have to play the game and go through the process.  If you use these tactics, I guarantee you will improve the response.

 

 




PBCS/EPBCS Data Import Options

Importing data directly into PBCS can be done in two ways: there is a Planning format and an Essbase format.  Yes, data can also be loaded through Data Management, but this article is focused on explaining the direct import process, the formats, and the benefits of each.

Planning File Format

What Is The Planning File Format

Per Oracle, if data is exported to a file in the Planning format, you can use Notepad to change the database name in the file; for example, ASOCube, and then import the data file into the aggregate storage outline provided all dimensions exist.

This is fine and dandy if you are importing data exported from Planning.  Normally, though, there is a source system providing the data to PBCS, and a format has to be provided to that source.  The format is basically a delimited file with quotes around a group of the fields.  Technically, there are three pieces of the file.

  1. One dimension has to be in the rows.
  2. One dimension has to be in the columns.
  3. The third section is the POV, or the remaining dimensions.

It looks like this.

Account,Jan,Feb,Mar,"POV",Plan Type Name
10020,40.10,50.60, 70.20,"Actual,Final,Company_10,FY18",Finance
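If you want to see what a scripted load of a file like this looks like, below is a minimal EPM Automate sketch.  The credentials and URL are placeholders, and it assumes a saved Import Data job (named "Import Plan Data" here) has already been created in the UI for the Planning file type.

# Sketch only - credentials, URL, identity domain, and the job name are placeholders
$user     = "serviceAdmin"
$password = "password"

& epmautomate login $user $password "https://planning-A12345.pbcs.us6.oraclecloud.com" A12345
# Upload the Planning-format file to the inbox, then run the saved Import Data job against it
& epmautomate uploadfile "C:\loads\PlanningFormat.csv"
& epmautomate importdata "Import Plan Data" "PlanningFormat.csv"
& epmautomate logout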

Why Use The Planning Format?

There are some benefits and drawbacks of using this format.

  • Data loaded to smart list accounts can be loaded as the actual smart list label. This is great for applications where the data is provided in that format.  WFP data often provides data in the smart list value, like Part Time/Full Time.
  • When there are errors, the load continues through the end of the file.
  • The job console shows the number of records processed, the number of records that couldn't be loaded, and the first x lines of errors with line numbers.  I will warn you that I have seen the console show all records loaded when nothing was loaded.  I have also seen no rejected records when the file didn't load completely.  So, when you are developing loads, I would recommend validating carefully and looking at the block counts to ensure the messaging in the console is correct.
  • The biggest drawback to the Planning format is that it loads through Planning and, partly because of the benefits above, it is slower to load.  This normally isn't a huge issue because the loads finish within a few minutes.
  • The format is a little odd and can sometimes be problematic to produce.  Certain dimensions are required in the row and column headers, which is often challenging.  For instance, most applications require the months to be in the row or column areas.  Since months can't be put in the POV, loading more than one month means the file needs multiple columns.  If the source can't produce this easily, a file for each month has to be created.

Essbase File Format

Why Use The Essbase Format?

Loading directly to Essbase is faster, and it is often easier to produce the needed format.  Because it loads directly to Essbase, none of the mapping done when loading through Planning (like smart lists) happens.  This is often a huge headache because the mapping has to be done in the source or in some middle ETL layer (assuming DM is not being used).  Another issue with this method is that when a record rejects, nothing more is loaded.  The job console also does not show the records processed.

What Is The Essbase File Format

Per Oracle, these are files exported by an on-premises product.  They can be formatted in either Cell list or Column list format.  Cell list format specifies a single cell value for each record, whereas Column list format specifies multiple cell values for each record.  For example, if you specify Time on the column, you might have the values Jan, Feb, Mar, and so forth on the column, with 12 values for each record (that is, one for each month).  Conversely, the Cell list equivalent would contain 12 records, one for each month.  For information on exporting files from Essbase, see the on-premises Oracle Essbase Administration Services Online Help.

That doesn't tell us much.  The format for this loading method is almost identical to the Planning format; the thought process is just a little different.  This file holds a column for every dimension but has no header line.  Each member is enclosed in quotes, and the fields are delimited by tabs.  All the dimensions are listed first, followed by the value of that intersection.  The Planning format shown above will look like this:

"10020"       "Actual"      "Final"       "Company_10" "FY18" "Jan"  40.10
"10020"       "Actual"      "Final"       "Company_10" "FY18" "Feb"  50.60
"10020"       "Actual"      "Final"       "Company_10" "FY18" "Mar"  70.20

Last Call

Neither of these formats/methods is complicated, but the documentation is lacking when it comes to explaining exactly what they are, and there is little mention of the pros and cons.  Hopefully sharing this will fill in the gaps that exist in the documentation.
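One last practical note: if your source can only produce the Planning format but you want the speed of the Essbase format, a small script can convert one to the other.  Below is a rough PowerShell sketch based on the two example files above; the file paths and the assumption that the months are Jan through Mar are mine, so adjust as needed.

# Sketch only - paths and the month list are assumptions based on the examples above
$inFile  = "C:\loads\PlanningFormat.csv"
$outFile = "C:\loads\EssbaseFormat.txt"
$months  = @("Jan","Feb","Mar")

$rows = Import-Csv $inFile
$lines = foreach ($row in $rows) {
    # The POV column holds the remaining dimensions as a comma-separated list
    $pov = $row.POV -split ','
    foreach ($month in $months) {
        # Quote every member, tab delimit, and end the record with the data value
        $members = (@($row.Account) + $pov + $month | ForEach-Object { '"' + $_ + '"' }) -join "`t"
        $members + "`t" + $row.$month.Trim()
    }
}
$lines | Set-Content $outFile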

 




Oracle EPBCS/PBCS May Updates – Big Changes!

I don't normally write up monthly updates, but this month there are a number of intriguing changes/updates/enhancements that are important to know about.  Some may change existing processes.  This is not an exhaustive list, but these are things I think all of us should take note of.

The changes that you will see in the UI may not be visible unless you clear your cache.

EPMAutomate

  • You Mac users are going to like this: EPM Automate can now be installed and run on a Mac. By the way, I am one of them.  I can't wait until the beta is finished for Smart View to work with Office 365, meaning Mac, iPad, etc.
  • You can now run multiple instances of EPMAutomate. So, logging out of one will not log you out of the other.  This will help those that want to run test and prod on one VM.  It will also remove any issue of one process stepping on another.
  • Admin mode can be enabled with EPMAutomate, which was a huge missing piece, in my opinion.
  • A sortMember command to order members in dimensions is now available.

Forms

  • No more composite forms, sorry guys!

Smart Push

  • There is a new Smart Push option called Use Database Suppression that enables you to apply row suppression at the database level instead of at the application level, thus eliminating the impact on query thresholds and improving Smart Push wait times.

Settings

  • There is a new setting to set the accounts that are linked between plan types to false. This uses a new attribute of HSP_LINK (not HSP_NOLINK).

Groovy

  • We now have the ability to use the new suppression option on the data map / smart push. This will improve performance and reduce the need to break up pushes in groups of members.
  • Groovy can now run jobs. This means a rule can execute other jobs within that rule.  They show as tasks in the job console.  This is limited to the execution of rules, rulesets, and templates.
  • All RTPs are now supported in ASO.

Smart View (AMEN on some of these; us old-timers will love them) – You will want to download the updated version to take advantage of these.

  • In-grid, cell-based POV—POV members are placed on the grid instead of in the POV toolbar.
  • Submit without refresh—Using the default Submit Data button in the Smart View ribbon, all data cells in a single grid (modified cells and unchanged cells) are marked dirty and submitted. Once the submit operation is complete, the entire grid will be refreshed.
  • Enhanced free-form—Users can insert empty columns and rows anywhere in a grid and change the alias table. Additionally, supports member auto-refresh where deleted members are returned to the grid upon refresh.
  • Multiple-grid ad hoc—Multiple ad hoc grids can be placed on the same Excel worksheet. With multiple-grid ad hoc, users can submit data from any grid on the sheet. Grids based on aggregate storage cubes and block storage cubes are supported on the same sheet. Each grid is independent; for example, if required, you can change the alias table for only one grid on the sheet.
  • Users will now get a warning when the version installed is out of date. Hopefully this can be turned off!

General

  • Jobs can now return a new status, success with warnings. An example of this might be loading data: the job finished successfully, but there were rejected records.  Make sure your automation accounts for this (see the sketch below)!
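As a hedge in automation, it is worth checking the return code of whatever EPM Automate command kicks off the job and digging into the Job console when it is anything but a clean success.  Below is a rough PowerShell sketch; the job and file names are made up, and the exact exit code EPM Automate returns for a job that completes with warnings should be verified before relying on it.

# Sketch only - "Load Actuals" is a placeholder for a saved Import Data job
& epmautomate importdata "Load Actuals" "actuals.csv"
if ($LASTEXITCODE -ne 0) {
    # A non-zero code may reflect warnings (such as rejected records) rather than an outright failure,
    # so review the Job console before failing the rest of the nightly process
    Write-Warning "importdata finished with exit code $LASTEXITCODE - check the Job console for rejected records"
}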

DM

  • There is now an option to not run the validation when data is loaded. For those of you that despise the performance challenges of using DM, this might be a game changer.  The longest part of the process is the actual validation, not the import or export to Planning.

Modules in ePBCS

  • Rolling forecast can now be configured for length.
  • There were some critical bugs that were fixed in WFP (I know first-hand).
  • Some CapX forms were improved as well.



Change The Spread Method In An Existing PBCS Application

Changing application settings was always a bit of a pain with an on-premise Planning application.  It was a time-intensive task of recreating the application, one group of artifacts at a time.  If you were a bit of a risk taker, you might have figured out that there were fields in the relational repository that could be updated.  Is there an easy way to do this with PBCS?  Since the repository is not accessible in the cloud, legacy methods are not available.  That said, I think it is easier, and seemingly less risky, with a PBCS application.

How To Perform Configuration Changes

The basic steps are very simple.  If migrations are new to you, take caution and make sure the application backup is readily available.  Always try this in the test environment first.

  1. Run a migration backup by going to the navigation menu and selecting Migration under the Tools header and clicking the Backup button.
  2. Download the backup by changing the view to Snapshots, selecting the ellipsis to the right of the created migration, and selecting Download.
  3. Unzip the downloaded file to a new folder.
  4. Edit the appropriate file / change the settings (see below).
  5. Zip the files previously unzipped to a new zip file.  Make sure the parent folder is not included.  The folder and files unzipped should be the root of the new zip file.
  6. Upload the zip file created above by going back to Migration under the Tools header in PBCS.  Change the view to Snapshots and click the Upload button.  If the backup is too large, you may need to use EPM Automate.
  7. Delete the existing application by moving to the Overview area which can be found in the navigation menu under the Application header, select the Action button and click Remove Application.
  8. After the application is deleted, log back in to PBCS and choose the Migration option in the navigation menu.  Create a new application with the updated zip file by clicking the ellipsis and choosing Import.

Updating The Configuration

Before proceeding, note that I have tried this by migrating only the configuration (not recreating the application), and it didn't work.  The application had to be recreated, so the entire backup was required.  So, although only one file is updated, it is still important to take a full backup.

The migration files hold everything you need to update pieces and parts of an application.  In this situation, the focus will be on the configuration options, specifically the spread method.  Since this must be set when the application is created, the spread method is often not fully thought through and may need to be changed later.  Changing it after the fact is pretty easy.  Navigate to and open the Application Definition.xpad file.  This is inside the HP-xxx folder, where xxx represents the name of the application.  It is a text file, so it can be opened in any text editor.  If Notepad is used, the line feeds won't be visible and all the lines will be smashed together; Notepad++ is a recommended alternative.

The Application Definition.xpad file, when opened, should look similar to the following.  This is not a full representation of the file so expect it to be larger when opened.

Scroll down a short way and find the WeeksDistribution property.  It will likely have a 445 pattern or will read Even.  Change this option to the preferred method and save the file.  The options are 445, 454, 544, and Even.

There are, as you see, other options that can be changed.  Although I have only changed the spread method, I am moderately confident that the others, if changed here, would be reflected when the zip file is imported.

 <Calendar>
  <BaseTimePeriod>12 Months</BaseTimePeriod>
  <WeeksDistribution>Even</WeeksDistribution>
  <AppStartYear>2015</AppStartYear>
  <FiscalYearStartDate>SameCalendarYear</FiscalYearStartDate>
  <StartMonth>Jan</StartMonth>
  <NumberOfYear>15</NumberOfYear>
  <AllYearsParent>Y</AllYearsParent>
</Calendar>

Note that some of these are not options when the application is created, like whether the application has a parent for years.  I have not tinkered with this for that reason, but it is there if you want to try.
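If you would rather script the edit than open the file by hand, something along these lines works.  This is only a sketch: the extract location and application folder name are assumptions, and it assumes the xpad file parses as well-formed XML, which the Calendar snippet above suggests.

# Sketch only - paths and the application folder name are assumptions
$xpad = "C:\Backups\PBCS_Snapshot\HP-Plan1\Application Definition.xpad"

[xml]$appDef = Get-Content $xpad
# Valid spread methods are 445, 454, 544, and Even
$appDef.SelectSingleNode("//WeeksDistribution").InnerText = "445"
$appDef.Save($xpad)

# Re-zip so the extracted folders sit at the root of the new archive (no parent folder)
Compress-Archive -Path "C:\Backups\PBCS_Snapshot\*" -DestinationPath "C:\Backups\PBCS_Updated.zip" -Force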

Importing A File Too Large For The UI

As stated above, large files cannot be loaded through the PBCS UI.  I have run into this before and believe the maximum size of the file that can be uploaded is less than 2GB.  EPM Automate has an upload file command that overcomes this.

epmautomate login username userpassword https://planning-domain.pbcs.datacenter.oraclecloud.com domain
epmautomate uploadfile "[path]\[filename].zip"

example:  
epmautomate login kgoodfriend GoJackets https://planning-A123345.pbcs.us6.oraclecloud.com A12345
epmautomate uploadfile "c:\backups\PBCS_In2Hyperion.zip"

Finishing Up

There are a lot of useful things you can do with the migration files.  Making changes is sometimes easier in bulk than one artifact at a time.  Many free tools are available to find and replace text in multiple files, and some can even use patterns.  Some changes require a bit of hunt and peck if the terms are common and occur in more places than you want to change.  If you have an application name that is similar to a database name, it gets a little more tedious.  Obviously, replacing a database name called Plan would likely be more work because Plan exists in many places, not just as a database name.  Here are some thoughts and uses.

  • Changing the application name
  • Changing database names
  • Updating member names in all forms and rules
  • Finding forms and rules that use members to be removed



Backing Up Essbase Cloud (EssCS) Applications

The good news is migrating to the cloud doesn’t change a lot when it comes to backing up your Essbase applications.  Conceptually, it is the same.  The utilities used are slightly different.

Enter CLI

If you are new to Essbase on the cloud, the CLI, or command line interface, is something you will want to download and configure.  It is a pretty useful utility and easy to use.  I will say that it is new and missing a lot of functionality you may want.  I was just as frustrated using EPMAutomate with PBCS when it came out.  Three years later, however, EPMAutomate is pretty complete.  I am hoping for the same progression with CLI.  For backing up your apps, the CLI will give you everything you need.

Running An LCM Backup

There is really only one command that is a must; I will get into why I say this in a second.  LcmExport will run an LCM export and store the result locally, which is a nice bonus.  There is no need for any other commands to download and rename it.

LcmExport has the following parameters.

  • -verbose (or -v) will provide a more complete response description, especially if there is an error
  • -application (or -a) requires an additional parameter that identifies your application name
  • -localdirectory (or -ld) requires an additional parameter that tells the command where to store the backup file
  • -zipfilename (or -z) requires an additional parameter, which is the name of the LCM zip file that will be stored locally
  • -threads (or -T) requires an additional parameter equal to the number of threads you want to use to run the backup
  • -skipdata (or -skip) will tell the LCM to ignore the data in the application
  • -overwrite (or -o) tells the process to overwrite the zip file if it already exists
  • -password (or -p) requires an additional parameter to send the password to the command

At minimum, the application and zipfilename parameters are required.  The localdirectory and overwrite parameters will likely be used in every call you make as well.

C:\[cli foldername]\esscs.bat lcmExport -a Sample -z Sample.zip -ld c:/temp -o

Why This Isn’t Enough

I have always felt very strongly that data exports should be done because corruption will remain in the pag files until it is fixed, and often times, it isn’t found for days, weeks, or months.  At that point, you can’t export your data and you are in real trouble.  So, I don’t rely solely on the LCM backup strategy.

There are a couple of ways to export the data.  With an on-premise implementation you might use MaxL to export the data.  The other option is to write a calculation script that does the export.  The calculation route provides more options for formatting, the delimiter, and what data is included.  It might be a little slower, but since this option was introduced, I have relied on it.

You could integrate MaxL at this point to do the same thing, but the CLI also provides the tools to do it if you use a calculation.  From here on, assume a calculation script exists named FullExport that exports the data to the application and database path on the server with a name of FullExport.txt.

At this point, there are two additional commands that will bulletproof your backup strategy.

C:\[cli foldername]\esscs calc -a Sample -d Basic -s FullExport.csc
C:\[cli foldername]\esscs download -f FullExport.txt -a Sample -d Basic -ld c:/backup -o

Completing The Circle

Normally this would be executed through DOS or, my favorite, PowerShell.  This allows the dynamic generation of the scripts so they can be reused.  Things like the application name, database name, local path, calc script, and possibly some others, would all be variables.  The result would be something similar to this.

C:\cli_utility\esscs login -url https://myEssbase-test-myDomain.analytics.us2.oraclecloud.com/essbase -u kylegoodfriend 
C:\cli_utility\esscs lcmExport  -a Sample -z Sample.zip -ld c:/Backups -o
C:\cli_utility\esscs calc -a Sample -d Basic -s FullExport.csc
C:\cli_utility\esscs download -f FullExport.txt -a Sample -d Basic -ld c:/Backups -o

I would add a step in my shell to rename this LCM and the data export downloaded with a date and time in the name.

REM Rename the LCM Zip file
ren c:\Backups\Sample.zip Sample_%date:~10,4%%date:~4,2%%date:~7,2%.zip 
REM Rename the data export
ren c:\Backups\FullExport.txt Sample_Backup_%date:~10,4%%date:~4,2%%date:~7,2%.txt 
REM Delete all files older than 30 days
forfiles /p c:\Backups /s /m *.* /D -30 /C "cmd /c del @path"
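For the PowerShell fans, here is a rough equivalent of the above with the pieces that change pulled out into variables.  It is only a sketch: the CLI location, application, database, and backup folder are assumptions, and a login call like the one shown earlier still has to run first.

# Sketch only - adjust the CLI path, app, database, and destination to your environment
$cli   = "C:\cli_utility\esscs.bat"
$app   = "Sample"
$db    = "Basic"
$dest  = "C:\Backups"
$stamp = Get-Date -Format "yyyyMMdd"

& $cli lcmExport -a $app -z "$app.zip" -ld $dest -o
& $cli calc -a $app -d $db -s "FullExport.csc"
& $cli download -f "FullExport.txt" -a $app -d $db -ld $dest -o

# Stamp the backups with the run date and purge anything older than 30 days
Rename-Item "$dest\$app.zip" "${app}_$stamp.zip"
Rename-Item "$dest\FullExport.txt" "${app}_Backup_$stamp.txt"
Get-ChildItem $dest -File | Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } | Remove-Item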

A Final Note

The calculation script referenced above would look something like this.  There are many options that can be set.  If you aren’t familiar with this, a quick google will get you what you need.

SET DATAEXPORTOPTIONS
{
  DataExportLevel "LEVEL0";
  DataExportDynamicCalc OFF;
  DataExportNonExistingBlocks OFF;
  DataExportRelationalFile ON;
  DataExportOverwriteFile ON;
};
DATAEXPORT "File" "," "/Sample/Basic/FullExport.txt";

As always, post and share. If you have a question, do the same.




Exporting Data in PBCS With Business Rules

Introduction

If your environment is a cloud product, whether it be PBCS or ePBCS, one thing that is critical to understand is that the backups produced in the Migration area may not be what you think.  Learning this after the fact may have negative consequences on your ability to restore data.  In the migration, the Essbase Data section is a copy of the pag, ind, and otl files.  When this is used to restore data, it restores the entire database, both data and metadata.  This may be OK for many situations, but it won't help you if

  • only specific data needs to be restored
  • specific data has changed and needs to be excluded from the restore
  • corruption exists in the database, because the copied pag files contain that same corruption
  • the pag files that hold the data are not readable
  • the size of the backup matters, because it includes all data, and upper-level data is normally exponentially larger than level 0 data alone

Text Data Export

Business Rules can be written to export data to the Inbox/Outbox that is delimited with a few formatting options.  The entire database can be included.  With fix statements, specific data can be isolated.  So, forecast could be exported to a file, plan another, and actuals a third.  Specific accounts, entities, and/or products can be isolated in cases when specific data was inadvertently changed or deleted.  This file is a text file that can be opened in any text editor, Microsoft Excel, a database, or any other application that you open text files to view or manipulate.

Example Business Rule

/* Set the export options */
 SET DATAEXPORTOPTIONS
 {
 DataExportLevel LEVEL0;
 DataExportDynamicCalc OFF;
 DataExportNonExistingBlocks OFF;
 DataExportDecimal 4;
 DataExportPrecision 16;
 DataExportColFormat ON;
 DataExportColHeader Period;
 DataExportDimHeader ON;
 DataExportRelationalFile ON;
 DataExportOverwriteFile ON;
 DataExportDryRun OFF;
 };
 
 FIX(@Relative("Account", 0),
     @Relative("Years", 0),
     @Relative("Scenario", 0),
     @Relative("Version", 0),
     @Relative("Entity", 0),
     @Relative("Period", 0), 
     @Relative("custom_dim_name_1", 0),
     @Relative("custom_dim_name_2", 0),
     @Relative("custom_dim_name_3", 0))

  DATAEXPORT "File" "," "/u03/lcm/filename_xyz.txt" "";

 ENDFIX

Some Hints

There are a few things that you may encounter and be a little confused about, so the following are a few things that might help.

  1. To see the data export, it must be exported to /u03/lcm/, which is the equivalent of your inbox.  Any file name can be used.
  2. Setting DataExportLevel to LEVEL0 will export the level 0 blocks, not just the level 0 members.  If there are any stored upper-level members in your dense dimensions, they will be exported unless the dimension is also in the FIX to include only level 0 members.
  3. The fix statement works the same as a fix statement in any business rule, so the data to be exported can be easily defined.
  4. In my experience, exporting dynamically calculated members drastically increases the time of the export.
  5. The export options are all pretty logical.  Some work in conjunction with each other and others are ignored depending on dependent setting values.  These are documented for version 11.1.2.4 here.
  6. This process can be automated with EPM Automate and include the download and time stamp of the backup for later use.
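Expanding on hint 6, a bare-bones sketch of that automation is below.  It assumes an EPM Automate session is already logged in, that the Business Rule was saved with the name ExportData, and that it writes filename_xyz.txt to the inbox as in the example rule above.

# Sketch only - the rule name is a placeholder; the file name matches the example rule above
& epmautomate runbusinessrule "ExportData"
& epmautomate downloadfile "filename_xyz.txt"

# Time stamp the download so multiple days of exports can be kept
$stamp = Get-Date -Format "yyyyMMdd_HHmm"
Rename-Item "filename_xyz.txt" "filename_xyz_$stamp.txt"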

Conclusion

There are benefits to both types of backups.  My preference is to either run both nightly, or run just the Business Rule.  By having both, the administrator has the option of restoring the data as needed, in the way that is most effective.  Having both provides the ultimate flexibility.  If space is an issue, exclude the data option in the Migration and just run the business rule.

 

From Oracle’s Documentation

DataExportLevel ALL | LEVEL0 | INPUT

  • ALL—(Default) All data, including consolidation and calculation results.
  • LEVEL0—Data from level 0 data blocks only (blocks containing only level 0 sparse member combinations).
  • INPUT—Input blocks only (blocks containing data from a previous data load or grid client data-update operation). This option excludes dynamically calculated data. See also the DataExportDynamicCalc option.

In specifying the value for the DataExportLevel option, use these guidelines:

  • The values are case-insensitive. For example, you can specify LEVEL0 or level0.
  • Enclosing the value in quotation marks is optional. For example, you can specify LEVEL0 or “LEVEL0”.
  • If the value is not specified, Essbase uses the default value of ALL.
  • If the value is incorrectly expressed (for example, LEVEL 0 or LEVEL2), Essbase uses the default value of ALL.

Description

Specifies the amount of data to export.

DataExportDynamicCalc ON | OFF

  • ON—(Default) Dynamically calculated values are included in the export.
  • OFF—No dynamically calculated values are included in the report.

Description

Specifies whether a text data export excludes dynamically calculated data.

Notes:

  • Text data exports only. If DataExportDynamicCalc ON is encountered with a binary export (DATAEXPORT BINFILE …) it is ignored. No dynamically calculated data is exported.
  • The DataExportDynamicCalc option does not apply to attribute values.
  • If DataExportLevel INPUT is also specified and the FIX statement range includes sparse Dynamic Calc members, the FIX statement is ignored.

DataExportNonExistingBlocks ON | OFF

  • ON—Data from all possible data blocks, including all combinations in sparse dimensions, are exported.
  • OFF—(Default) Only data from existing data blocks is exported.

Description

Specifies whether to export data from all possible data blocks. For large outlines with a large number of members in sparse dimensions, the number of potential data blocks can be very high. Exporting Dynamic Calc members from all possible blocks can significantly impact performance.

DataExportPrecision n

n (Optional; default 16)—A value that specifies the number of positions in exported numeric data. If n < 0, 16-position precision is used.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on precision (accuracy). Depending on the size of a data value and number of decimal positions, some numeric fields may be written in exponential format; for example, 678123e+008. You may consider using DataExportPrecision for export files intended as backup or when data ranges from very large to very small values. The output files typically are smaller and data values more accurate. For output data to be read by people or some external programs, you may consider specifying the DataExportDecimal option instead.

Notes:

  • By default, Essbase supports 16 positions for numeric data, including decimal positions.
  • The DataExportDecimal option has precedence over the DataExportPrecision option.

Example

SET DATAEXPORTOPTIONS
    { 
    DataExportPrecision 6;
    DataExportLevel ALL;
    DataExportColHeader "Measures";
    DataExportDynamicCalc ON;
    };
    DATAEXPORT "File" "," "output1.out";

Initial Data Load Values

"Sales" "COGS" "Margin" "Marketing" "Payroll" "Misc" "Total Expenses" "Profit" "Opening Inventory" "Additions" "Ending Inventory" "Margin %" "Profit %" 
"100-10" "New York"
"Jan" "Actual" 678123456.0 271123456.0 407123456.0 941234567890123456.0 51123456.0 0 145123456.0 262123456.0 2101123456.0 644123456.0 2067123456.0 60123456.029 38123456.6430
"Feb" "Actual" 645123 258123 3871234 9012345 5112345 112345678 14212345 24512345 2067123456 61912345 20411234 601234 37123456.98 
"Mar" "Actual" 675 270 405 94 51 1 146 259 2041 742 2108 60 38.37037037037037 
"Qtr1" "Actual" 1998 799 1199 278 153 2 433 766 2101 2005 2108 60.01001001001001 38.33833833833834

Exported Data Format

"Sales","COGS","Margin","Marketing","Payroll","Misc","Total Expenses","Profit","Opening Inventory","Additions","Ending Inventory","Margin %","Profit %","Profit per Ounce","100-10","New York"
"Jan","Actual",6.78123e+008,2.71123e+008,4.07e+008,9.41235e+017,5.11235e+007,0,9.41235e+017,-9.41235e+017,2.10112e+009,6.44123e+008,2.06712e+009,60.0186,-1.388e+011,-7.84362e+016
"Feb","Actual",645123,258123,387000,9.01235e+006,5.11235e+006,1.12346e+008,1.2647e+008,-1.26083e+008,2.06712e+009,6.19123e+007,2.04112e+007,59.9886,-19544.1,-1.05069e+007
"Mar","Actual",675,270,405,94,51,1,146,259,2041,742,2108,60,38.3704,21.5833

DataExportDecimal n

Where n is a value between 0 and 16.

If no value is provided, the number of decimal positions of the data to be exported is used, up to 16 positions, or a value determined by the DataExportPrecision option if that is specified.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on legibility; output data is in straight text format. Regardless of the number of decimal positions in the data, the specified number is output. It is possible the data can lose accuracy, particularly if the data ranges from very large values to very small values, above and below the decimal point.

Notes:

  • By default, Essbase supports 16 positions for numeric data, including decimal positions.
  • If both the DataExportDecimal option and the DataExportPrecision option are specified, the DataExportPrecision option is ignored.

Example

SET DATAEXPORTOPTIONS
    {DataExportDecimal 4;
    DataExportLevel "ALL";
    DataExportColHeader "Measures";
    DataExportDynamicCalc ON;
    };
    DATAEXPORT "File" "," "output1.out";

Initial Data Load Values

"Sales" "COGS" "Margin" "Marketing" "Payroll" "Misc" "Total Expenses" "Profit" "Opening Inventory" "Additions" "Ending Inventory" "Margin %" "Profit %" 
"100-10" "New York"
"Jan" "Actual" 678123456.0 271123456.0 407123456.0 941234567890123456.0 51123456.0 0 145123456.0 262123456.0 2101123456.0 644123456.0 2067123456.0 60123456.029 38123456.6430
"Feb" "Actual" 645123 258123 3871234 9012345 5112345 112345678 14212345 24512345 2067123456 61912345 20411234 601234 37123456.98 
"Mar" "Actual" 675 270 405 94 51 1 146 259 2041 742 2108 60 38.37037037037037 
"Qtr1" "Actual" 1998 799 1199 278 153 2 433 766 2101 2005 2108 60.01001001001001 38.33833833833834

Exported Data Format

"Sales","COGS","Margin","Marketing","Payroll","Misc","Total Expenses","Profit","Opening Inventory","Additions","Ending Inventory","Margin %","Profit %","Profit per Ounce"
"100-10","New York"
"Jan","Actual",678123456.0000,271123456.0000,407000000.0000,941234567890123520.0000,51123456.0000,0.0000,941234567941246980.0000,-941234567534246910.0000,2101123456.0000,644123456.0000,2067123456.0000,60.0186,-138799883591.4395,-78436213961187248.0000
"Feb","Actual",645123.0000,258123.0000,387000.0000,9012345.0000,5112345.0000,112345678.0000,126470368.0000,-126083368.0000,2067123456.0000,61912345.0000,20411234.0000,59.9886,-19544.0820,-10506947.3333
"Mar","Actual",675.0000,270.0000,405.0000,94.0000,51.0000,1.0000,146.0000,259.0000,2041.0000,742.0000,2108.0000,60.0000,38.3704,21.5833

Output Format Options

DataExportColFormat ON | OFF

  • ON—The data is output in columnar format.
  • OFF—Default. The data is output in non-columnar format.

Description

Specifies if data is output in columnar format. Columnar format displays a member name from every dimension; names can be repeated from row to row, enabling use by applications other than Essbase tools. In non-columnar format, sparse members identifying a data block are included only once for the block. Non-columnar export files are smaller, enabling faster loading to an Essbase database.

Notes

Do not use the DataExportColFormat option in combination with the DataExportRelationalFile option, which already assumes columnar format for files destined as input files to relational databases.

Example

SET DATAEXPORTOPTIONS
 {
 DATAEXPORTCOLFORMAT ON;
 };
 FIX("100-10", Sales, COGS, Jan, Feb, Mar, Actual, Budget)
 DATAEXPORT "File" "," "d:\temp\test2.txt" ;
ENDFIX;

DataExportColHeader dimensionName

Description

Specifies the name of the dense dimension that is the column header (the focus) around which other data is referenced in the export file. Use the DataExportColHeader option only when you export data to a text file. For example, if from Sample Basic the Year dimension is specified, the output data starts with data associated with the first member of the Year dimension: Year. After all data for Year is output, it continues with the second member: Qtr1, and so on.

Notes

MaxL, ESSCMD, and Essbase exports do not provide a similar capability. With these methods, Essbase determines the focal point of the output data.

Exporting through Report Writer enables you to specify the header in the report script.

Example

SET DATAEXPORTOPTIONS {DATAEXPORTCOLHEADER Scenario;};

Specifies Scenario as the page header in the export file. The Scenario dimension contains three members: Scenario, Actual, and Budget. All Scenario data is shown first, followed by all Actual data, then all Budget data.

DataExportDimHeader ON | OFF

  • ON—The header record is included.
  • OFF—Default. The header record is not included.

Description

Use the DataExportDimHeader option to insert the optional header record at the beginning of the export data file. The header record contains all dimension names in the order as they are used in the file. Specifying this command always writes the data in “column format”.

Example

SET DATAEXPORTOPTIONS 
 {
 DATAEXPORTLEVEL "ALL"; 
 DATAEXPORTDIMHEADER ON; 
 };
FIX("100-10", "New York", "Actual")
 DATAEXPORT "File" "," "E:\temp\2222.txt" ;
ENDFIX;

Specifying the DataExportDimHeader ON option while exporting Sample Basic writes the data in column format, with common members repeated in each row. The data begins with a dimension header, as shown in the first two rows of the example file below:

"Product","Market","Year","Scenario","Measures"
"Sales","COGS","Marketing","Payroll","Misc","Opening Inventory","Additions","Ending Inventory"
"100-10","New York","Jan","Actual",678,271,94,51,0,2101,644,2067
"100-10","New York","Feb","Actual",645,258,90,51,1,2067,619,2041
"100-10","New York","Mar","Actual",675,270,94,51,1,2041,742,2108
"100-10","New York","Apr","Actual",712,284,99,53,0,2108,854,2250
"100-10","New York","May","Actual",756,302,105,53,1,2250,982,2476
"100-10","New York","Jun","Actual",890,356,124,53,0,2476,1068,2654
"100-10","New York","Jul","Actual",912,364,127,51,0,2654,875,2617
"100-10","New York","Aug","Actual",910,364,127,51,0,2617,873,2580
"100-10","New York","Sep","Actual",790,316,110,51,1,2580,758,2548
"100-10","New York","Oct","Actual",650,260,91,51,1,2548,682,2580
"100-10","New York","Nov","Actual",623,249,87,51,0,2580,685,2642
"100-10","New York","Dec","Actual",699,279,97,51,1,2642,671,2614

DataExportRelationalFile ON | OFF

  • ON—The output text export file is formatted for import to a relational database.
    • Data is in column format; sparse member names are repeated. (The DataExportColFormat option is ignored.)
    • The first record in the export file is data; no column heading or dimension header is included, even if specified. (The DataExportColHeader and DataExportDimHeader options are ignored.)
    • Missing and invalid data is skipped, resulting in consecutive delimiters (commas) in the output. The optional “missing_char” parameter for DATAEXPORT is ignored
  • OFF—Default. The data is not explicitly formatted for use as input to a relational database.

Description

Using the DataExportRelationalFile option with DATAEXPORT enables you to format the text export file to be used directly as an input file for a relational database.

Example

SET DATAEXPORTOPTIONS {
 DataExportLevel "ALL";
 DataExportRelationalFile ON;
};

FIX (Jan)
 DATAEXPORT "File" "," c:\monthly\jan.txt
ENDFIX;

Processing Options

DataExportOverwriteFile ON | OFF

  • ON—The existing file with the same name and location is replaced.
  • OFF—Default. If a file with the same name and location already exists, no file is output.

Description

Manages whether an existing file with the same name and location is replaced.

DataExportDryRun ON | OFF

  • ON—DATAEXPORT and associated commands are run, without exporting data.
  • OFF—Default. Data is exported

Description

Enables running the calculation script data export commands to see information about the coded export, without exporting the data. When the DataExportDryRun option value is ON, the following information is written to the output file specified in the DATAEXPORT command:

  • Summary of data export settings
  • Info, Warning, and Error messages
  • Exact number of blocks to be exported
  • Estimated time, excluding I/O time.

Notes

  • The DataExportDryRun option does not work with exports to relational databases.
  • If you modify the script for reuse for the actual export, besides removing the DataExportDryRun option from the script you may want to change the name of the export file.

Example

SET DATAEXPORTOPTIONS 
 {
 DataExportLevel "ALL";
 DataExportColHeader "Measures";
 DataExportColFormat ON;
 DataExportDimHeader ON;
 DataExportDynamicCalc OFF;
 DataExportDecimal 0;
 DataExportDryRun ON;
 DataExportOverwriteFile ON;
 };

FIX("Qtr1")
 DATAEXPORT "File" "," "E:\temp\log.txt" ;
ENDFIX;



Updating EPM Automate Just Got Easier

Introduction

One of the challenges with EPM Automate has been eliminated this month.  Although it was a minor issue, the need to update EPM Automate regularly was something that had to be considered monthly.  Administrators of PBCS do not always have access to the on-premise footprint, like a Windows VM, that runs the automation.  Even more frequently, access to the production VM is only available to IT staff, so updating that environment is more strict, and has to be scheduled.  That schedule doesn’t always sync up to the changes in PBCS.

Update Command

As of the 02.18 release, a new command is available.  The "upgrade" command will automatically download, and silently install, the newest version of the EPM Automate utility.  Once logged in, execute the following command.

epmautomate upgrade

If you are a frequent visitor, you know I am a fan of PowerShell.  All the automation I do with EPM Automate in the Windows environment utilizes this free scripting tool.  This command has been added to all my new projects so there is no manual effort in keeping the utility current.  This also eliminates any issues that pop up due to incompatibility issues with PBCS.

In my reusable scripts, this new function has been added.

function EPMA_Upgrade{
  $CmdLine = "upgrade"
  # Run epmautomate.bat and wait for it to finish so the exit code can be captured
  $ReturnCode = Start-Process "$EPMAutomate_Path\epmautomate.bat" $CmdLine -Wait -passthru -WindowStyle $ShowDosWindow
  LogResult "EPM Automate has been updated" $ReturnCode.ExitCode
  # Return the exit code so the caller below can react to a failed upgrade
  return $ReturnCode.ExitCode
}

During a nightly process, the function is referenced.  If the request fails, the administrators are emailed.

$ReturnResult = EPMA_Upgrade
if( $ReturnResult -eq 1){Send_Email_Error;Exit}

Summary

This is a welcome addition.  Now, administrators and application owners don't have to worry about using new features or keeping EPM Automate in sync with the active version of PBCS.  As the great Forrest Gump would say – "One less thing."




Easy Way to Randomize Data in PBCS

Introduction

When an application is used to demonstrate sensitive information, a training class includes people who shouldn't see live data, or security is being tested, using real data is often not an option.  I have written PowerShell scripts and .NET applications to randomize data from Essbase exports, as well as Custom Defined Functions to randomize with calculations.  PBCS just made it much easier: with one EPM Automate command, all the data in the application can be randomized.  We don't have the ability to control what is changed, which would be a nice addition, as I always excluded anything greater than 0 and less than or equal to 1 in an effort to exclude rates and things like a boolean true/false or headcount.  But the ability to do it in one command is mighty appreciated!

Masking Data in PBCS

In a recent update to EPM Automate, Oracle introduced a new command – maskData.  It can't get any easier to use.  Once you log in to EPM Automate, simply run maskData.

epmautomate maskData

When this is executed, it will ask you to confirm that you want to randomize the data in the application.  If you want to bypass the confirmation, add -f after maskData and it will force the command to run without confirmation.

Thank You Mr. Obvious

Be very careful using this!  Make sure you are logged on to test, and not production!
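One way to enforce that is to put a guard in the automation so the command refuses to run against anything that doesn't look like a test URL.  This is only a sketch; the URL pattern, credentials, and identity domain are placeholders, so match the check to your own naming convention.

# Sketch only - everything below is a placeholder; adjust the "test" check to your URL naming
$user     = "serviceAdmin"
$password = "password"
$url      = "https://planning-test-A12345.pbcs.us6.oraclecloud.com"
$domain   = "A12345"

if ($url -notmatch "test") {
    Write-Error "Refusing to mask data: $url does not look like a test environment"
    exit 1
}

& epmautomate login $user $password $url $domain
& epmautomate maskData -f
& epmautomate logout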




My Adventures in Groovy Calculations – Part 1

What Is Groovy

Recently, Groovy scripting was added to ePBCS business rules as an option instead of the GUI, or the go-to scripting for you old-timers who still refuse to change.  These are defined in the Business Rule editor as Groovy calculations.  So, what is Groovy?

“Apache Groovy is an object-oriented programming language for the Java platform. It is a dynamic language with features similar to those of Python, Ruby, Perl, and Smalltalk. It can be used as a scripting language for the Java Platform, is dynamically compiled to Java virtual machine (JVM) bytecode, and interoperates with other Java code and libraries. Groovy uses a Java-like curly-bracket syntax. Most Java code is also syntactically valid Groovy, although semantics may be different.”

If you haven’t heard of Groovy, you may want to do some research.  Oracle is using more and more Groovy in applications as administrative options and a communication method between applications.  Groovy is a standard and can be used with millions of applications and websites with the REST API.

What Groovy Script/Calculations Are Not

Groovy calculations are not java-based calculations.  It is not a new calculation language.  It does provide a way to interact with a Data Form in ePBCS and build a calculation script dynamically.  So, Groovy, in the context of Groovy Calculation Scripts, does not connect to Essbase via Groovy Business Rules. It simply builds a string that is sent to Essbase as a calculation.  It does, however, interact with Planning and that is where the power starts.  With the ability to have all the Groovy functionality to manipulate strings and now the ability to interact with the data form, dynamic calculations can be built.  The calculation script sent to Essbase is no different, but the script can now be dynamically generated based on things like, the POV, the text value of a Smart List, whether the values in the grid were updated, whether the data entered meets validation criteria, and other similar things.

If you are experienced with Hyperion Planning, you may have dabbled with JavaScript to do data validation, calculate data prior to the user submitting it, or prevent users from submitting data.  It was a great option to provide feedback to users, but that basically was useless when Smart View allowed users to open Data Forms in Excel. The JavaScript did nothing unless the form was opened in an internet browser.

Getting Started

The first step in creating a Groovy Calculation Script is to, well, create one.  To do that, create a new business rule and change the view from Designer to Edit Script.  If you haven't noticed this before, it provides a way to toggle the GUI to a script view.

Next, find the drop-down box in the toolbar to the far right named Script Type.  This option will read Calc Script.  Change it to Groovy Script.

At this point, the script window is now set to validate Groovy script, not Essbase syntax. Even though it doesn't do anything yet, you have just created your first Groovy Business Rule!

Use Cases

There is a lot of potential in this functionality.  To get you thinking, here are some examples:

  1. Execute calculations on large sparse dimensions on ONLY the members that changed on the form.
  2. Access the Smart List text to do validation, use in calculations, and store for later use in Essbase (maybe save a member name in a member that is numeric, like employee ID, Cost Center, or account).
  3. Perform validation before the calculation is built and sent to Essbase. For example, if the sum of a column used to allocate dollars doesn’t sum to 100, send a calculation that ONLY returns a message and doesn’t perform the allocation.
  4. Perform text manipulation previously done in Essbase with functions.  Concatenating member names and truncating member name prefixes and date formats are some of the few I use regularly.  Many of these functions are extremely slow and force the calculation to execute in serial mode, so to be able to do them outside the script is now an option.

Real World Example

The Problem

I am working with a client who wants to override the result of driver-based calculations based on historical trends.  In this example, the volume of cases can be changed and the profit rate can be adjusted.  Once the form is saved, the overrides need to be removed.

Herein lies the challenge.  If the overrides are removed and the calculation runs on all members in the form, the results would revert back to what they were prior to the override because the override values no longer exist in the database or Data Form.  So, rather than performing the calculation on the override, it would use #missing or zero and take the results right back to what the drivers dictated.  The most obvious way around this issue is to execute the calculation on ONLY the rows (vendors in this example) that were edited.  In other words, dynamically generate the FIX statement on the vendors that were updated.

The Non-Groovy FIX Statement

Without Groovy, the FIX statement would include @RELATIVE("Vendor",0) to run the calculation on all vendors on the Data Form.  This has two issues.  One, it calculates all the vendors, which would change the vendors back to the pre-override values.  Two, every time the user saves the form, the fix is traversing through 30,000 possible vendors.  Although most companies have fewer than 8,000 active vendors, it still poses a performance issue to calculate 8,000 blocks when only a few typically change.

The only aspect of the calculation that is going to change in this situation is the FIX statement, so that will be the only piece shown in the comparison between a Groovy script and a non-Groovy script.

FIX(&vScenario,
    &vVersion,
    &vCompany,
    &vYear,
    "Local",
    "Input",
    @RELATIVE("Vendor",0),
    "Jan":"Dec",
    "Regular_Cases")

The Groovy FIX Statement

Since Groovy can dynamically create the calculation script, it looks more like the example below. The sPOV variable will be a string in Groovy that holds all the members in the data form's POV.  The vendorList Groovy variable will hold the list of vendors that have been edited.

FIX($sPOV
    $vendorList,
    "Jan":"Dec",
    "Regular_Cases")

@RELATIVE(“Vendor”,0), which would produce a list of every vendor in the hierarchy, is replaced with “V300000300040003”, “V300000300060001”, “V300000300070002”.

The issue of running the calculation on vendors that have not been edited has now been solved.  An added benefit is that the calculation runs on 3 of the 8,000 blocks, so what took 30 seconds now completes in under a second.

Now, The Interesting Part

Let’s dissect the Groovy calculation script piece by piece.

Setting The Stage

For Groovy to perform operations, there are a few housekeeping items that need to be addressed.  First, a few string builders need to be created to store strings that grow through the process and are concatenated into the Essbase calculation before it is submitted for processing.

There are also some variables used to interact with the form's data grid.  For easy reference to the grid throughout the script, the grid object is stored in a variable (curgrid).  Next, a variable is created to hold an iterator over the cells that have been edited (itr).  The likelihood that these variables would exist in most of these scripts is high, so it might make sense to get familiar with these objects and their parameters.

//Get current Data Form
DataGrid curgrid = operation.getGrid()

// Construct a string builder
// Holds the calculation script sent to Essbase
StringBuilder scriptBldr = StringBuilder.newInstance()

// Holds the list of vendors that have changed
StringBuilder vendorList = StringBuilder.newInstance()
String sVendors

// Iterator which gives you only the edited cells
GridIterator itr = curgrid.getDataCellIterator(PredicateUtils.invokerPredicate("isEdited"))

// Holds the list of members from the POV – the function returns an array, so this
// parsed the array and places quotes around each member and separates them with a comma
String sPOV = '"' + curgrid.getPov().essbaseMbrName.join(',').replaceAll(',','","') + '"'

At this point, sPOV holds the quoted, comma-delimited list of POV members, and the two string builders are still empty.

Find the Vendors That Have Changed

We know the users will enter overrides in this Data Form (Case Growth and Average Price).  The following piece of the Groovy script will build a delimited list of the vendors on the rows that have been edited.  It will include quotes around the member names to account for any member names that are numeric or have special characters, and the names will be separated by commas.  Groovy provides the ability to append to a string with <<""", and close it with """.  The if statement ensures that a vendor will not be appended to the string more than once if multiple columns are changed.

// Loop through each cell that was edited and build the vendor list
// If multiple cells on the same row are edited, only add vendor once
itr.each{ DataCell cell ->
  sVendors = cell.getMemberName("Vendor")
  if(vendorList.indexOf(sVendors) < 0){
    vendorList <<"""
   ,"$sVendors"
   """
  }
}

At this point, only a few variables have changed. The bulk of the Groovy functionality is finished.  We now have the POV and the list of vendors that need to be in the FIX statement.

The Essbase Calculation

The next section will append text to the scriptBldr string.  This string will ultimately be sent to Essbase as the calculation to be performed.  Groovy variables are embedded and replaced with the values that they were set to previously.  The two used in this calculation are $vendorList and $sPOV.  Other than those two pieces, everything else is pulled from the original Business Rule.

// Add the calculation defined in a business rule to the string variable
// the POV and Vendor List will be used to dynamically set the FIX statement
scriptBldr <<"""
VAR v_Price;

FIX($sPOV
    $vendorList,
    "Jan":"Dec",
    "Regular_Cases")

  /* Calculate Overrides */
  "OEP_Working"(
  v_Price = "Avg_Price/Case"->"YearTotal";

  "Regular_Cases" = (1 + "Case_Growth_Rate"->"BegBalance") * 
                    ("Regular_Cases"->"FY16"->"Final");
  IF("Avg_Price/Case_Inp"->"BegBalance" == #Missing)
    "Net_Sales" = (v_Price) * (1 + "Case_Growth_Rate"->"BegBalance") * 
                  "Regular_Cases"->"FY16"->"Final";
  ELSE
    "Net_Sales" = ("Avg_Price/Case_Inp"->"BegBalance") * 
                  (1 + "Case_Growth_Rate"->"BegBalance") *
                  "Regular_Cases"->"FY16"->"Final" ;
  ENDIF

  IF("GP_2_%_Inp"->"BegBalance" == #Missing)
    "GP_Level_2" = ("GP_Level_2_%"->"YearTotal"->"FY16"->"Final") * "Net_Sales" ;
  ELSE
    "GP_Level_2" = ("GP_2_%_Inp"->"BegBalance") * "Net_Sales" ;
  ENDIF
  )
ENDFIX

FIX($sPOV
    $vendorList)

  CLEARDATA "Avg_Price/Case_Inp"->"BegBalance";
  CLEARDATA "GP_2_%_Inp"->"BegBalance";
  CLEARDATA "Case_Growth_Rate"->"BegBalance";
ENDFIX
"""

At this point, the scriptBldr variable is a complete Essbase calculation that can be validated in any Business Rule.

Finishing Up

The last thing required is to send the calculation text built above to Essbase.

println scriptBldr // Sends the script to the log
return scriptBldr // Sends the script to Essbase

Verifying What Was Sent To Essbase

When the Data Form is saved, the results in the form can be validated back to the logic to verify that the calculation worked as expected.  Regardless of whether the calculation executes successfully or fails, the value of scriptBldr (the calculation sent to Essbase) is captured in the Job console.

In the Job console, click the Job Status link.  This includes the value of the scriptBldr variable.  The text can be copied from this window, and if it failed to execute, can be copied into a Business Rule and validated there to find the issue.

Wrapping Up

I will admit that I am not a Java programmer, so I am still educating myself on the potential this affords developers.  I am struggling to digest the API documentation and to truly understand the depth of the possibilities. I do know this opens up a whole world we didn’t have with Hyperion Planning. I plan on learning and using Groovy calculations more and more because of the possibilities it provides.  Look for more examples and knowledge sharing as I get my hands around the API and integrate this into more delivery solutions.  To get future publications, sign up to be notified about new posts and articles at www.in2hyperion.com.




Remove Dimensions From Planning LCM Extracts

Problem

I am currently working with a client that is updating a planning application and one of the changes is to remove a dimension.  After the new application was setup and the hierarchies were modified to meet the objectives, migrating artifacts was the next step.  As many of you know, if you try to migrate web forms and composite forms, they will error during the migration due to the additional dimension in the LCM file.  It wouldn’t be a huge deal to edit a few XML files, but when there are hundreds of them, it is extremely time consuming (and boring, which is what drove me to create this solution).

Assumptions

To fully understand this article, a basic understanding of XML is recommended.  The example below assumes an LCM extract was run on a Planning application and it will be used to migrate the forms to the same application without a CustomerSegment dimension.  It is also assumed that the LCM extract has been downloaded and decompressed.

Solution

I have been learning and implementing PowerShell scripts for the last 6 months and am overwhelmed by how easy it is to complete complex tasks.  So, PowerShell was my choice to modify these XML files in bulk.

It would be great to write some long article on how smart this solution is and overwhelm you with my wit, but there is not much to it.  A few lines of PowerShell will loop through all the files and remove the XML tags related to a predefined dimension.  So, let's get to it.

Step 1 – Understand The XML

There are two folders of files we will look at.  Forms are under the plan type, and the composite forms are under the global artifacts.  Both of these are located inside the resource folder.  If there are composite forms that hold the dimension in question as a shared dimension, both will need to be updated.  Scripts are included below to update both of these areas.

Inside each of the web form files will be a tag for each dimension, and it will vary in location based on whether the dimension is in the POV, page, column, or row.  In this particular example, the CustomerSegment dimension is in the POV section.  What we want to accomplish is removing the <dimension/> tag where the name attribute is equal to CustomerSegment.

For the composite forms, the XML tag is slightly different, although the concept is the same.  The tag in composite form XML files is <sharedDimension/> and the attribute is dimension, rather than name.

Step 2 – Breaking Down the PowerShell

The first piece of the script just sets some environment variables so the script can be changed quickly and used wherever and whenever it is needed.  The first variable is the path of the Data Forms folder to be executed on.  The second is the dimension to be removed.

# Identify the source of the Data Forms folder and the dimension to be removed
# (the plan type folder name, Plan1 here, will vary by application)
$lcmSourceDir = "Z:\Downloads\KG04\HP-SanPlan\resource\Plan Type\Plan1\Data Forms"
$dimName = "CustomerSegment"

The next piece of the script recurses through the folder and stores the files in an array.  There is a where statement to exclude directories so the code only executes on files.

# List all files, recursively, that exist in the path above
$files = Get-ChildItem $lcmSourceDir -Recurse | where {$_.Attributes -notmatch 'Directory'}

Step 3 – Removing The Unwanted Dimension

The last section of the script does most of the work.  This will loop through each file in the $files array and

  1. Opens the file
  2. Loops through all tags and deletes any <dimension/> tag with a name attribute with a value equal to the $dimName variable
  3. Saves the file

# Loop through the files and find an XML tag equal to the dimension to be removed
$files | Foreach-Object {

  [xml]$xml = Get-Content $_.FullName
  $xml.SelectNodes("//dimension") | Where-Object {$_.name -eq $dimName} | ForEach-Object {
    # Remove each node from its parent
    [void]$_.ParentNode.RemoveChild($_)
  }
  $xml.save($_.FullName)
  Write-Host "$($_.FullName) updated."
}

Executing The Logic On Composite Forms

The above concepts are exactly the same to apply the same logic on composite forms files in the LCM.  If this is compared to the script applied to the web forms files, there are three differences.

  1. The node, or XML tag, that needs to be removed is called sharedDimension, not dimension.
  2. The attribute is not name in this instance, but is called dimension.
  3. A counter has been added to identify whether the file holds the dimension to be removed, so the file is only saved if it was altered.
The Script

$lcmSourceDir = "Z:\Downloads\KG04\HP-SanPlan\resource\Global Artifacts\Composite Forms"
$dimName = "CustomerSegment"

# List all files
$files = Get-ChildItem $lcmSourceDir -Recurse | where {$_.Attributes -notmatch 'Directory'}

# Remove CustomerSegment from each composite form
$files | Foreach-Object {
  # Reset a counter to 0 - used later when the file is saved
  $fileCount = 0

  [xml]$xml = Get-Content $_.FullName
  $xml.SelectNodes("//sharedDimension") | Where-Object {$_.dimension -eq $dimName} | ForEach-Object {
    # Increase the counter for each node that matches the criteria
    $fileCount++
    # Remove each node from its parent
    [void]$_.ParentNode.RemoveChild($_)
  }

  # If the dimension was found in the file, save the updated contents
  if($fileCount -ge 1) {
    $xml.save($_.FullName)
    Write-Host "$($_.FullName) updated."
  }
}

Summary

The first script may need to be run on multiple plan types, but the result is an identical folder structure with altered files that have the identified dimension removed.  This can be zipped and uploaded to Shared Services and used to migrate the forms to the application that has the dimension removed.
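If you want to script that last step as well, something like the following works.  The paths mirror the ones used earlier in the article; just keep the same folder structure as the original download so Shared Services accepts the import.

# Sketch only - zip the edited extract back up, keeping the same folder structure as the original download
Compress-Archive -Path "Z:\Downloads\KG04\HP-SanPlan" -DestinationPath "Z:\Downloads\KG04\HP-SanPlan_NoCustomerSegment.zip" -Force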

The scripts above can be copied and pasted into PowerShell, or the code can be Downloaded.