Data Validation Rules in Planning 11.1.2.x

Goodbye to the days of writing JavaScript to enforce data input policies and rules on Planning web forms.  With Planning version 11.1.2 and newer, Oracle has introduced a powerful set of data validation tools within the Planning Data Form Designer itself.  Let's walk through a scenario of how this works.

Say that we have a product mix form that will be used to input percentages as drivers for a revenue allocation.  Here’s what the form looks like:

We expect the sum of these percentages to be 100% at the “Electronics” parent member.  If this is not the case, the revenue allocation will incorrectly allocate data across products.  So how do we enforce this rule?  Simple… let's take a look at the data form design.

As the row definition we've included two member selections: 1) Descendants(Seg01), or Descendants(Electronics), and 2) Seg01, or Electronics.  We are going to add a validation rule to row 2 of the data form.  To do this, highlight row 2 and click the plus (+) sign to add a new validation rule.  Notice that the validation rules section now says 'Validation Rules: Row 2'.

The Data Validation Rule Builder will then be launched. Let’s fill in the rule.  We should ensure the Location is set to ‘Row 2’.  We’ve filled in a name and quick description, then ensured that the ‘Enable validation rule’ check box is checked.

For the rule we’ve defined some simple if logic:

IF [Current Cell Value] != [Value = 1] THEN [Process Cell] ;

To define what occurs when this condition is met, we choose the 'Process Cell' action, indicated by the small gear with the letter A next to it.  Here we will highlight the cell red and notify the user with a validation message.

We click through to save the Process Cell definition and the Validation Rule itself and should now see the rule in the data form definition.

So let’s take a look at how the end user will interact with this form.  Percentages are entered by product for each month.  Upon save, notice that all months for Electronics that equal 100% appear normal.  December only sums to 90% and is highlighted in red as we specified in the data validation rule.  We cannot limit the user’s ability to save the form until the cell equals 100%; we can only notify them of the issue, and explain the cause and potential resolutions.

Of course, this is a simple example of what can be done using Planning’s Data Validation Rules.  The possibilities are endless.  Oracle has more scenario walkthroughs in the Planning Administrator’s Guide.  View them here: http://download.oracle.com/docs/cd/E17236_01/epm.1112/hp_admin/ch08.html




Optimizing Your Data Load Improves More Than You Think

The format of the data loaded to Essbase is often an afterthought.  But should it be?  When requesting a data file from a source system, having it sorted to mirror your outline is more important than you may think.

Assume an outline has the following dimensions.

  • Period [DENSE]
  • Account [DENSE]
  • Region [SPARSE]
  • Category [SPARSE]
  • Product [SPARSE]
  • Organization [SPARSE]

The most efficient way to receive a data file would be to have it sorted by Organization, Product, Category, Region, and then Account.  Data files load faster when the columns that hold the sparse members are sorted in reverse order of the sparse dimensions that exist in the outline.

The data loads faster because each block of data is opened only one time.  If the data were sorted by the dense members first, every block would have to be opened multiple times.  If a single sparse member combination has 3,000 dense members with data, that block could be opened up to 3,000 times.
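
As a purely hypothetical illustration (the member names below are made up), a file sorted this way groups every record for one sparse combination together, so each block is touched once and then left alone:

Org100  Widget  Retail  East  Sales  100  110  120
Org100  Widget  Retail  East  COGS    60   65   70
Org100  Widget  Retail  West  Sales   90   95  100
Org100  Widget  Retail  West  COGS    55   60   65

The first two records belong to the same block (Org100, Widget, Retail, East), so that block is created once, both records are written to it, and the load moves on to the next block.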

There are more important benefits than load speed, however.  When blocks are opened multiple times, the database becomes far more fragmented than it needs to be.  Fragmentation slows calculations, and data retrieval can also be impacted, which can lead to frustrated customers.

When the data is not sorted, every data load exacerbates any performance issues that already exist.  So, whenever possible, sort the data load files by the last sparse dimension in the outline, then the second-to-last sparse dimension, and so on.  You may be pleasantly surprised at the benefits.




Is My Essbase Calculation Seeing Deja Vu All Over Again?

Everybody knows the quickest way from point A to point B is a straight line.  Everybody assumes that the path is traveled only one time – not back and forth, over and over again.  I see a lot of Essbase calculations and business rules, from experienced and novice developers, that go from point A to point B taking a straight line.  But, the calculation travels that line multiple times and is terribly inefficient.

Here is a simple example of a calculation.  Assume the Account dimension is dense, and the following members are all members of the Account dimension.  We will also assume there is a reason to store these values rather than making them dynamic calc member formulas.  Most of these would be embedded in a FIX statement so the calculation only executes on the appropriate blocks; to minimize confusion, that is not shown in the example.

"Average Balance" = ("Beginning Balance" + "Ending Balance") / 2;
"Average Headcount" = ("Beginning Headcount" + "Ending Headcount") / 2;
"Salaries" = "Average Headcount" * "Average Salaries";
"Taxes" = "Gross Income" * "Tax Rate";

One of the staples of writing an effective calculation is to minimize the number of times a single block is opened, updated, and closed.  Think of a block as a spreadsheet, with accounts in the rows, and the periods in the columns.  If 100 spreadsheets had to be updated, the most efficient way to update them would be to open one, update the four accounts above, then save and close the spreadsheet (rather than opening/editing/closing each spreadsheet 4 different times for each account).

I will preface this by stating that the following can behave differently in different versions.  The 11.1.x admin guide specifically states that the following is not accurate.  Due to the inconsistencies I have experienced, I always play it safe and assume the behavior below regardless of the version.

You might be surprised to know that the example above passes through every block four times.  First, it will pass through all the blocks and calculate Average Balance.  It will then go back and pass through the same blocks again, calculating Average Headcount.   This will occur two more times for Salaries and Taxes.  This is, theoretically, almost 4 times slower than passing through the blocks once.

The solution is very simple: place parentheses around the calculations.

(
"Average Balance" = ("Beginning Balance" + "Ending Balance") / 2;
"Average Headcount" = ("Beginning Headcount" + "Ending Headcount") / 2;
"Salaries" = "Average Headcount" * "Average Salaries";
"Taxes" = "Gross Income" * "Tax Rate";
)

This will force all four accounts to be calculated at the same time.  The block will be opened, all four accounts will be calculated and the block will be saved.

If you are new to this concept, you probably have done this without even knowing you were doing it.  When an IF statement is written, what follows the anchor?  An open parenthesis.  And, the ENDIF is followed by a close parenthesis.  There is your block!

"East"
(IF(@ISMBR("East"))
    "East" = "East" * 1.1;
ENDIF;)

I have seen this very simple change drastically improve calculations.  Go back to a calculation that can use blocks and test it.  I bet you will be very pleased with the improvement.




Improving The User Experience with Global Rates

 

Almost every planning or forecasting application will have some type of allocation based on a driver or rate that is loaded at a global level.  Sometimes these rates are a textbook example of moving data from one department to another based on a driver, and sometimes they are far more complicated.  Many times, whether it is an allocation or a calculation, rates are entered (or loaded) at a higher level than the data they are applied to.

A very simple example of this would be a tax rate.  In most situations, the tax rate is loaded globally and applied to all the departments and business units (as well as level 0 members of the other dimensions).  It may be loaded to “No Department”, “No Business Unit”, and a generic member in the other custom dimensions that exist.

If a user needs the tax rate in the example above, they have to pull "No Department" and "No Business Unit."  Typically, users don't want to select different members from a dimension to line a rate up with the data it applies to (Total Department for taxes, but No Department for the rate).  They want to see the tax rate at Total Department, Total Business Unit, and everywhere in between.

There are a number of ways to improve the experience for the user.  An effective solution is to have two members for each rate.  One is stored and one is dynamic.  There is no adverse effect on the number of blocks, or the block size.  The input members can be grouped in a hierarchy that is rarely accessed, and the dynamic member can be housed in a statistics hierarchy.

Using the tax rate in the example above, create a "Tax Rate Input" member.  Add this to a hierarchy called "Rate Input Members".  Any time data is loaded for the tax rate, it is loaded to Tax Rate Input, No Department, No Business Unit, etc.  Under the statistics/memo hierarchy, create a dynamic member called "Tax Rate".  "Tax Rate" would be the member referenced in reports.  Its formula includes a cross-dimensional reference to the "Tax Rate Input" member, and would look something like this.

"No Department"->"No Business Unit"->"Tax Rate Input";

When a user retrieves "Tax Rate", it always returns the rate that is loaded to "No Department," "No Business Unit," and "Tax Rate Input," no matter what department or business unit the report is set to.  Creating reports in Financial Reporting or Smart View now becomes that much easier!

There is an added bonus for the system administrators.  Any calculation that uses the rate (you know, the ones with multi-line cross-dimensional references to the rates) is a whole lot easier to write, and a whole lot easier to read because the cross-dimensional references no longer exist.
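
As a quick sketch (the member names are illustrative and borrowed from the examples above), compare a tax calculation before and after introducing the dynamic member:

/* Before: the rate requires the full cross-dimensional path */
"Taxes" = "Gross Income" * "No Department"->"No Business Unit"->"Tax Rate Input";

/* After: the dynamic "Tax Rate" member resolves the cross-dimensional reference internally */
"Taxes" = "Gross Income" * "Tax Rate";

The results are identical; the difference is that the cross-dimensional reference is defined once in the outline instead of being repeated throughout the business rules.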

Before you move the application to production, make sure to set the input rate members' consolidation method to "Never."  Don't expect this change to make great improvements in performance, but it will cause aggregations to ignore these members when consolidating the hierarchies.  A more important benefit is that users won't be confused if they ever do look at the input rates at a rolled-up level.  The ONLY place they would see the rate is at level 0, where it is an accurate reflection of the rate.

Note:  It is recommended to create member names without spaces.  The examples above ignored this rule in an effort to create an article that is more readable.




What’s New in Hyperion EPMA 11.1.2?

EPMA

The release of version 11.1.2 has brought a plethora of improvements to the entire Hyperion suite of products, and EPMA is no different. This post will cover some of the significant changes that were included.

Improved Support for Essbase

This release has provided several updates that increase the functionality of EPMA as it relates to Essbase. Some of the more important ones include:
  1. Utilizing the Reorder Children dialog box, a new sort order can now be created to reorder members in the hierarchy.
  2. Performance settings for dimensions can now be modified in EPMA.
  3. Dynamic Time Series (DTS) is now supported on the Period dimension (BSO cubes).
  4. The ability to add Typed Measures and members with a Date Format has also been included.  (Note that Varying Attributes are still not supported in this release.)

Application Troubleshooting Support

As we all know, EPMA can occasionally become out of sync with the dimension library or one of the products to which we are trying to push metadata. A new application diagnostic feature has been added in this release to help users fix this issue. This diagnostic tool determines inconsistencies between the source and target. Once the inconsistencies have been discovered, they can either be corrected manually or dealt with automatically.

Financial Management Copy Application Utility

HFM supports the ability to copy an EPMA application using the Copy Application Utility. This can be done in two different ways:
  1. Select the Financial Management application; it will then be copied as a Classic application. Once this has been done, the EPMA upgrade feature can be used.
  2. Alternatively, the LCM tool can be used to migrate the application. Once this is done, the Copy Application Utility can be utilized to move the data.

Batch Client

This release includes a couple of adjustments to the batch client that improve the automation process.
  1. Login through a proxy is now supported.
  2. Single Sign-On (SSO) login is also supported.

Follow the link below to view the complete document of changes:

Oracle EPMA Documentation




What’s New in Hyperion Essbase 11.1.2?

Working with people new to Essbase every three to six months, I am always looking for ways to show users their hierarchies effectively.  Many of them don't have access to Essbase Administration Services or EPMA.  So, I always fall back on Excel, both as a distribution method and as documentation, to show hierarchies.

Expanding hierarchies to all descendants is a great way to show small hierarchies, but I am always asked to make it a collapsible hierarchy using the Excel grouping feature.  Doing this manually for a hierarchy with thousands of members is extremely time consuming and very error prone.

The following script can be added to any workbook to automate this effort.

Sub CreateOutline()
    Dim cell As Range
    Dim iCount As Integer
    For Each cell In Selection
        'Check the number of spaces in front of the member name 
        'and divide by 5 (one level)
        iCount = (Len(cell.Value) - Len(Trim(cell.Value))) / 5
        'Only execute if the row is indented; Excel supports at most 8 outline levels
        If iCount <> 0 And iCount <= 8 Then cell.EntireRow.OutlineLevel = iCount
    Next cell
    MsgBox "Completed"
End Sub

Setup

First, this subroutine has to be added to a workbook.  Open the Visual Basic editor, right-click on the workbook in the Project Explorer window, and add a new module.  Paste the code above into the new module.  The editor is accessed differently in different versions.  In Excel 2007 and 2010, the Developer ribbon is not visible by default; to make it visible, go to the Navigation Wheel, click Excel Options, and check the "Show Developer tab in the Ribbon" checkbox.

How To Use

First, open the member selection option in the Essbase add-in or Smart View, select the parent, and add all of its descendants.  Alternatively, change the drill type to all descendants and zoom in on the top member of the hierarchy.

Retrieve, or refresh, the data, and make sure the indent option is set so the children are indented.  Now, highlight the range of cells holding the hierarchy/dimension to which the grouping should be applied; this should be cells in a single column of the worksheet.  Open the code editor, place the cursor inside the subroutine you added above, and click the green play triangle in the toolbar to execute the script.  When it finishes, go back to the worksheet and the hierarchy will be grouped.

Excel limits groupings to eight levels.  If the hierarchy has more than eight levels, anything deeper is ignored.  Now the hierarchy can be expanded and collapsed for viewing.

Shortcut keys or toolbar buttons can be assigned to execute this function if it is used frequently.  If you are interested in doing this, there are a plethora of how-to articles on the topic; a quick Google search will get you started if you choose to go down that path.

So, the next time you need to explain a hierarchy in Essbase, or distribute it in a common format, hopefully this script will help.




What’s New in Hyperion Shared Services 11.1.2?

Shared Services

 

As you’ve no doubt noticed by now, this has turned into a series of posts involving new features in the 11.1.2 release of the Hyperion products. This post will cover some of the significant changes to Shared Services, including improvements to Security Administration, Lifecycle Management, and Taskflows.

Security Administration

It’s been well-documented at this point that there were multiple issues with the OpenLDAP approach to the Native Directory. In 11.1.2, OpenLDAP has been replaced with a relational database as the storage point for native accounts and provisioning. This has already proven beneficial, as it allows for the next improvement below.
There is no longer a need to synchronize users with Essbase, as it is now done automatically. This is a welcome change for most, as it was always very easy to forget to refresh security. However, group synchronization must still be done manually.
The supported SSL configurations have also seen significant improvements. These include:
  1. SSL Offloading
  2. 2-way SSL deployment
  3. SSL termination at the web server
Oracle Single Sign-On (OSSO) is also supported in this release. The Oracle Internet Directory (OID) is used to provide SSO access to web applications.

Lifecycle Management (LCM)

Like the rest of Shared Services, LCM has adopted Oracle Diagnostic Logging (ODL) as the standard logging mechanism.
Perhaps the biggest improvement to LCM is that it now supports the extraction of data. Essbase data now appears as a selectable artifact when performing an export, and can be updated with the outline. On this note, I should probably point out that for cross-product migrations, LCM determines the correct order based on dependencies.
Some other modifications to LCM include:
  1. Additional information in migration status reports, including source and destination details.
  2. Users must be provisioned with the Shared Services Administrator role to work with the Deployment Metadata tool.
  3. The Calc Manager is supported, and has its own node under Foundation. As a result, business rules can now be migrated to classic HFM and Planning applications.

Shared Services Taskflow

This release has seen the addition of two new roles in Shared Services:
  1. Manage Taskflows – This role allows users to create and edit a taskflow.
  2. Run Taskflows – This role permits users to view and run a taskflow, but not to create or edit taskflows.

 

Follow the link below to view the complete document of changes:
 
Oracle Shared Services Documentation



Creating Hierarchies & Groupings In Excel – One Click Solution

A lot of users like to see hierarchies in Excel and build groupings around these hierarchies so they can be collapsed and expanded easily.  It is not a huge deal to do this for things that don’t change a lot, like months rolling to a quarter, but it can be extremely cumbersome to maintain for organizational or account hierarchies that are large or change frequently.

By adding some VBA code (a macro) to your workbook, managing groupings can be completely automated.  This can be customized for a plethora of different scenarios.  Below are 2 examples that Hyperion users will encounter.  One caveat to this is that Excel limits the number of grouping levels to 8.  If the worksheet has more than 8 levels, the following logic would not provide the expected result.

Creating a Hierarchy Based On Excel Indents

If a spreadsheet exists where the hierarchy is created with the indent feature of Excel (not multiple columns), select the range to which the groupings should be applied and execute the following script.  It loops through the selected cells and creates the groupings based on the number of indents in each cell.

Sub CreateGroupingsOnIndents()

    Dim cell As Range
    For Each cell In Selection
        'Group the row at the same level as the cell's Excel indent;
        'clear any grouping on rows that are not indented
        If cell.IndentLevel <> 0 Then
            cell.EntireRow.OutlineLevel = cell.IndentLevel
        Else
            cell.EntireRow.ClearOutline
        End If
    Next

End Sub

Creating a Hierarchy Based On SmartView/Excel Add-In Indents

When retrieving from Essbase, cells are indented by adding 5 spaces in front of the member name for each level.  Subtracting the length of the trimmed member name from the full length of the cell gives the number of leading spaces, and dividing that result by 5 identifies the indent level.  Select the cells with the member names and execute the following.

Sub CreateGroupingsOnSpaces()

    Dim cell As Range
    Dim iLength1 As Integer
    Dim iLength2 As Integer
    Dim iIndent As Integer

    For Each cell In Selection
        'The number of leading spaces divided by 5 gives the indent level
        iLength1 = Len(cell.Value)
        iLength2 = Len(LTrim(cell.Value))
        iIndent = (iLength1 - iLength2) / 5
        If iIndent <> 0 Then
            cell.EntireRow.OutlineLevel = iIndent
        Else
            cell.EntireRow.ClearOutline
        End If
    Next

End Sub

Set Up a Module

If you are unfamiliar with adding custom code to an Excel workbook, follow the steps below.

Excel 2003 and below

  1. Select Tools/Macro/Visual Basic Editor
  2. Right click on the workbook in the Project window, and select Insert/Module
  3. Expand the module folder and open the new module (likely module1)
  4. Paste the example above in this window to the right
  5. Execute it by pressing F5 or clicking the green play triangle in the toolbar

Excel 2007 and greater

  1. Select the Navigation Wheel, and check the “Show Developer tab in the Ribbon” checkbox in the Popular tab
  2. Select the Developer Ribbon and click Visual Basic
  3. Right click on the workbook in the Project window, and select Insert/Module
  4. Expand the module folder and open the new module (likely module1)
  5. Paste the example above in this window to the right
  6. Execute it by pressing F5 or clicking the green play triangle in the toolbar

These can also be associated with a custom menu or toolbar if you choose to take the extra step!




Learning Life Cycle Management (LCM): Command Line Security Synchronization

The purpose of this article is to introduce the command line Life Cycle Management (LCM) utility in Oracle EPM. The LCM tool can be used to export and import objects found within the Oracle EPM environment, including Security, Essbase, Hyperion Planning, Financial Management, etc.  As one gets more familiar with LCM, one comes to realize how powerful the tool is and how empty life without LCM was. Without LCM, some of the more detailed artifacts within an application were difficult to move between environments.  LCM provides a centralized mechanism for exporting and importing nearly all of the objects within an Oracle EPM application or module. The table below gives an idea of all the facets of LCM.

 

Application Artifacts by Module

Module – Artifacts
Shared Services – User and Group Provisioning; Projects/Application Metadata
Essbase – Files (.csc, .rpt, .otl, .rul); Data; Filters; Partitions; Index and Page files (drive letters); Application and Database properties; Security
EAS/Business Rules – Rules; Locations; Sequences; Projects; Security
Hyperion Planning – Forms; Dimensions; Application Properties; Security
Hyperion Financial Management – Metadata; Data; Journals; Forms/Grids; Rules; Lists; Security
Financial Data Quality Management – Maps; Security; Data; Metadata; Scripts
Reporting and Analysis (Workspace) – Reports; Files; Database Connections; Security

 

The LCM tool is integrated into the Shared Services web interface.  It can be found under the Application Groups tab. Within the application groups there are three main areas of interest:

  1. Foundation – includes Shared Services security such as Users/Groups and Provisioning.
  2. File System – This is where the exported files go by default. They are stored server side, on the Shared Services server, in E:\Hyperion\common\import_export.
    Under this main folder, the contents are broken out by the user account that performed the export. Within the export folder, there is an “info” folder and a “resource” folder. The info folder provides an XML listing of the artifacts contained within the export. The resource folder contains the actual objects that were exported.

    The LCM command line tool provides more flexibility because it can be installed on any machine and its results can be redirected to any local folder. This is very useful if, for example, the Shared Services node is a Unix machine and the LCM users are unfamiliar with Unix. Simply install the LCM Command Line Utility on a Windows machine and redirect its output to a local Windows folder using the -local command line option.

  3. Products and Applications – Each registered product is listed and provides a mechanism to export and import the respective objects for the associated applications (Essbase, Planning, etc.).

 

Going Command Line

The Shared Services LCM GUI is a great way to become familiar with the LCM tool. However, when it is time to start automating LCM tasks and debugging issues, the command line LCM utility is very helpful. To get started, the LCM command line tool requires a single argument: an XML file that contains the migration definition. The quickest way to obtain the XML file is to use the Shared Services LCM web interface to select the objects you wish to migrate, select Define Migration to pull up the LCM Migration Wizard, and follow the prompts until the last step. Two options are presented: “Execute Migration” or “Save Migration Definition”. Choose “Save Migration Definition” to save the migration definition to a local file.

 

That is pretty much all there is to it… move the XML migration definition file to the location where LCM is installed (for instance, \Hyperion\common\utilities\LCM\9.5.0.0\bin), open a command line, and run Utility.bat as indicated:

E:\Hyperion\common\utilities\LCM\9.5.0.0\bin>Utility.bat SampleExport.xml
Attempting to load Log Config File:../conf/log.xml
2011-03-20 11:50:49,015 INFO  - Executing package file - E:\Hyperion\common\utilities\LCM\9.5.0.0\bin\SampleExport.xml
>>> Enter username - admin
>>> Enter Password------------
2011-03-20 11:50:57,968 INFO  - Audit Client has been created for the server http://hyp13:58080/interop/Audit
2011-03-20 11:50:58,421 WARN  - Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
2011-03-20 11:51:03,421 INFO  - Audit Client has been created for the server http://hyp13:58080/interop/Audit
2011-03-20 11:51:03,437 INFO  - MIGRATING ARTIFACTS FROM "Foundation/Shared Services" TO "/SampleExport"
2011-03-20 11:51:32,281 INFO  - Message after RemoteMigration execution - Success. HSS log file is in - E:\Hyperion\common\utilities\LCM\9.5.0.0\logs\LCM_2011_03_20_11_50_48_0.log
2011-03-20 11:51:32,687 INFO  - Migration Status - Success

E:\Hyperion\common\utilities\LCM\9.5.0.0\bin>


LCM Example: Synchronizing Shared Services Security between Environments

LCM is often used to move objects and security between environments, such as from a development environment to a production environment. While LCM makes this easy, it is not as straightforward as simply running an export from one environment and importing it into another. The reason is that LCM imports work in a “create/update” mode; in other words, the operations performed in LCM are typically additive in nature. The typical LCM export/import will capture new users and new application provisioning, but it will not handle removing user provisioning, removing or changing groups, or removing users from the system altogether. This is an easy oversight, and it means security will drift out of sync over time, which can cause issues as well as security implications. At a high level, the steps to sync provisioning using LCM are:

  1. Export Users/Groups/Provisioning from Source Environment
  2. Export Users/Groups from Target Environment
  3. Delete Using Step 2 Results the Users/Groups in Target Environment
  4. Import Users/Groups/Provisioning into Target Environment

Essentially, Steps 1 and 4 are the typical export/import operations, where security is exported from one environment and imported into another. However, two additional steps are necessary. In Step 3, the users and groups in the target environment are deleted, which removes their provisioning as well. This leaves an empty, clean environment into which security can be imported, ensuring no residual artifacts remain. To use the LCM delete operation, a list of items to be deleted must be supplied. This is where Step 2 comes in: a simple export of the users and groups in the target environment provides the list that Step 3 needs in order to delete the respective users and groups.

Below are some sample XML migration definitions for each step:

 

Step 1 – Export Users/Groups/Provisioning from Source Environment

Note: By default the results will be sent to the source Shared Services server in the “import_export” directory. You can use LCM to redirect the output to keep the results all in the same environment (the target system) by using the command line option [-local/-l] (run utility.bat without any command line options to see help for your version of LCM). Simply redirect the results into the local folder, \Hyperion\common\import_export, in the Target system.
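
For example, if the Step 1 definition below were saved as Step1Export.xml (a file name chosen purely for illustration) and run from the LCM bin directory on the target machine, the call might look something like the following; the exact flag syntax can vary by version, so run Utility.bat with no arguments first to confirm it, as noted above.

E:\Hyperion\common\utilities\LCM\9.5.0.0\bin>Utility.bat Step1Export.xml -local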

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating Shared Services to File System ">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://sourceSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="FileSystem-Connection1" type="FileSystem" description="File system connection" HSSConnection="MyHSS-Connection1" filePath="/Step1UsersGroupsSource"/>
        <ConnectionInfo name="AppConnection2" type="Application" product="HUB" project="Foundation" application="Shared Services" HSSConnection="MyHSS-Connection1" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="AppConnection2">
                <Options>
                    <optionInfo name="userFilter" value="*"/>
                    <optionInfo name="groupFilter" value="*"/>
                    <optionInfo name="roleFilter" value="*"/>
                </Options>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Users"/>
                <Artifact recursive="true" parentPath="/Native Directory/Assigned Roles" pattern="*"/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Groups"/>
            </Source>
            <Target connection="FileSystem-Connection1">
                <Options/>
            </Target>
        </Task>
    </Tasks>
</Package>

Step 2 – Export Users / Groups from Target Environment

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating Shared Services to File System ">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://targetSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="FileSystem-Connection1" type="FileSystem" description="File system connection" HSSConnection="MyHSS-Connection1" filePath="/Step2UsersGroupsTarget"/>
        <ConnectionInfo name="AppConnection2" type="Application" product="HUB" project="Foundation" application="Shared Services" HSSConnection="MyHSS-Connection1" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="AppConnection2">
                <Options>
                    <optionInfo name="userFilter" value="*"/>
                    <optionInfo name="groupFilter" value="*"/>
                </Options>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Users"/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Groups"/>
            </Source>
            <Target connection="FileSystem-Connection1">
                <Options/>
            </Target>
        </Task>
    </Tasks>
</Package>

Step 3 – Delete Users/Groups in Target Environment

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating File System to Shared Services">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://targetSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="AppConnection1" type="Application" product="HUB" description="Destination Application" HSSConnection="MyHSS-Connection1" project="Foundation" application="Shared Services"/>
        <ConnectionInfo name="FileSystem-Connection2" type="FileSystem" HSSConnection="MyHSS-Connection1" filePath="/Step2UsersGroupsTarget" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="FileSystem-Connection2">
                <Options/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Users"/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Groups"/>
            </Source>
            <Target connection="AppConnection1">
                <Options>
                    <optionInfo name="operation" value="delete"/>
                    <optionInfo name="maxerrors" value="100"/>
                </Options>
            </Target>
        </Task>
    </Tasks>
</Package>

Step 4 – Import Users and Groups into Clean Target Environment

This step assumes that the Step 1 export was redirected to the target environment, into its import_export directory. The respective folder, Step1UsersGroupsSource, can also be copied manually from the source to the target environment instead of using the redirect-to-a-local-folder technique.

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating File System to Shared Services">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://targetSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="AppConnection1" type="Application" product="HUB" description="Destination Application" HSSConnection="MyHSS-Connection1" project="Foundation" application="Shared Services"/>
        <ConnectionInfo name="FileSystem-Connection2" type="FileSystem" HSSConnection="MyHSS-Connection1" filePath="/Step1UsersGroupsSource" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="FileSystem-Connection2">
                <Options/>
                <Artifact recursive="true" parentPath="/Native Directory" pattern="*"/>
            </Source>
            <Target connection="AppConnection1">
                <Options>
                    <optionInfo name="operation" value="create/update"/>
                    <optionInfo name="maxerrors" value="100"/>
                </Options>
            </Target>
        </Task>
    </Tasks>
</Package>

Troubleshooting with Command Line LCM

LCM can be a great tool when it works flawlessly. However, it can quickly become part of mission critical activities like promoting artifacts from development to production. Consequently, it is necessary to learn some troubleshooting skills to maintain business continuity using LCM.

  1. Review the output of the LCM operation. Usually it will provide some detail about the error that was received.
  2. Review the server-side SharedServices_LCM.log, located at ORACLE_HOME\logs\SharedServices\SharedServices_LCM.log.
  3. Turn on debugging for the command line LCM tool. In log.xml and hss-log.xml, located in
    E:\Hyperion\common\utilities\LCM\9.5.0.0\conf, change the value in the following line from "info" to "debug":
    <param name="Threshold" value="info" />
  4. Use Google and the Oracle Knowledgebase to search for more information.
  5. Try only a subset of the initial objects. For instance, Essbase can export a number of objects: Outline, Calc Scripts, Rule Files, Report Scripts, Substitution Variables, Location Aliases, and Security. Try one at a time to determine which part of the whole is failing.
  6. Restart the environment. LCM is an emerging technology and can sometimes just be in a bad state. I’ve seen countless LCM issues where bouncing the environment clears the issue up.
  7. Look for special characters that might be present in your data. LCM is a Java tool and uses XML and text files to transmit data. There are instances where special characters can mess up the parsing.
  8. Look for patches – as mentioned previously, LCM is an emerging technology and is still somewhat buggy (especially older versions). Check release notes in patches for enhancements/bug fixes in LCM.



Hyperion Troubleshooting and Debugging Guide Part 2 of 2

This section will talk about how to dive into debugging critical issues with Oracle EPM.

Start a Problem Log

The most useful habit to develop during issue resolution is to start a detailed log about the issue. Some problems can take days or weeks to resolve and require trying hundreds of prospective fixes. It is easy for a “small” problem to become a long-winded issue, and it is hard to foresee when the issue will resemble the proverbial onion: keep peeling off layers and finding more and more to fix. If the problem log is created at the start, all the important details can be captured. Additionally, it is much easier to bring others (such as management) up to speed and create support tickets when all of the information is documented. This log should include the error as the end user sees it, the error from any logs you are able to capture, screenshots, timestamps, and the things that you have tried along with their results.
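
There is no required format; even a simple plain-text template like the one below (the fields are only a suggestion) keeps the history in one place:

Date/Time:      <when the error occurred>
Error (user):   <exact message the end user sees>
Error (logs):   <message found in the logs and the log file it came from>
Screenshots:    <file names or locations>
Attempted fix:  <what was changed or tried>
Result:         <what happened afterward>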

Reproduce the Issue

The first thing to find out is whether the issue is reproducible. It is very difficult to solve an issue that is not reproducible. Many errors are simply ‘glitches’ and may have been caused by a very improbable event, such as a database hiccup. For instance, a database problem propagates into the Oracle EPM system, forcing it into a bad state. Such a problem may never present itself again. Consequently, an initial step toward resolution is to restart the Oracle EPM services to bring them back into a ‘known state’. If the problem is not reproducible after the restart, go back to the problem log and record everything you can. This type of issue will need to be profiled over a period of time to try to discover patterns if it occurs again.

The Numerous Logs

Once it is discovered that the issue is not a simple glitch, it is time to start digging. As mentioned previously, the first place to track down the cause of an issue is in the logs. The logs come in various forms. Here is a general breakdown of the log types:

 General

Windows Event Viewer: This is helpful for general system-related messages. Some modules built on Windows technology (DCOM) also log messages here, for example Financial Management (HFM) and Financial Data Quality Management (FDQM).
Application Logs: These logs are generated by the Hyperion code itself and often contain the most useful information.
Application Server Logs: This type of log pertains to a Java-based web application. Most of the Hyperion modules with a web-based front end have application server logs, which run within the WebLogic, Tomcat, or WebSphere container.
Web Server Logs: The web server controls the handoff of web requests between the Hyperion modules. The best way to use this log is to look for error codes (404, 401, etc.) and review the corresponding URL that was used to ensure it is correct. Sometimes it is obvious that the URL in the web log has the wrong domain, points to the wrong server, or cannot resolve the context.

 

Start by reviewing the log for the product where the error is occurring. The Application Logs and Application Server Logs will be most useful at first. The goal is to find a useful error message that can be used in the next process to find a resolution to the problem.

Common Log Locations:

Unfortunately, the actual log locations change drastically between recent versions of Oracle/Hyperion products. As stated before, searching for *.log might be useful.

Example Application Server Logs:

Essbase Admin Services (Svr2): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/EssbaseAdminServices0/logs
Workspace (Svr1): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/FoundationServices0/logs
Financial Reporting (Svr1): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/FinancialReporting0/logs
Analytic Provider Services (Svr2): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/AnalyticProviderServices0/logs
Web Analysis (Svr1): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/WebAnalysis0/logs

 

Example Application Logs

Reporting and Analysis Core (Svr3): /Oracle/Middleware/user_projects/epmsystem1/diagnostics/logs/ReportingAnalysis/
    (for example, /Oracle/Middleware/user_projects/epmsystem1/diagnostics/logs/ReportingAnalysis/stdout_console_default.log)
Essbase (Svr4): /Oracle/Middleware/user_projects/epmsystem1/diagnostics/logs/essbase/

Sifting Through the Logs:

It helps to know which modules depend on each other in order to quickly pick out the respective log files to analyze. The basic idea is to determine which products are interacting and to review each log in detail for messages. It is important to review the logs of the product not only during runtime (as the issue is happening), but also during startup. Sometimes the fastest way to cut out the fluff is to stop the services, move or delete all the existing logs, and start the environment back up. This ensures any log messages are relevant to the issue. Otherwise, one has to sift through potentially large logs looking for timestamps to ensure relevance, which can be daunting.

Product – Depends On
Shared Services – Relational Database, MSAD/LDAP
Lifecycle Management (LCM) – Shared Services, LCM source/target applications
Essbase – Shared Services
Hyperion Planning – Shared Services, Essbase, Business Rules, Relational Database per app
Business Rules – Shared Services, Hyperion Planning, Essbase, Relational Database (single database)
Hyperion Financial Management – Shared Services, Relational Database (single database), DCOM (Event Viewer)
Financial Data Quality Management (FDM) – Shared Services, Relational Database per app, adapters for Essbase, Planning, HFM, etc., DCOM (Event Viewer)
Strategic Finance – Shared Services, Relational Database (optional)
Data Relationship Management – Shared Services, Database Client, adapters, DCOM (Event Viewer)

 

Found an Error Message!

After discovering the error message, the first thing to ask is: does this message make any sense? Try to use it within the context of your problem to solve the issue. Often, it is necessary to use external resources such as Google, the Oracle Support Knowledgebase, and the Oracle Forums to further research the issue. More often than not, there will be information available regarding your issue.

Note: If possible, do not search using end-user messages, i.e., what the user sees when encountering the error.  Rather, find a detailed message in the logs. End-user messages are usually very generalized and can be misleading because of the vast number of issues that might match the general error message.

If it is still a struggle to find a useful error message, most of the Hyperion modules use a logging mechanism that can be switched into debug mode. The actual method differs by product; most modules use log4j, and there is often a .properties file where you can change the logging level from “ERROR” or “WARN” to “DEBUG”. For instance, to enable debugging in Hyperion Planning: log into a Planning application, go to Administration -> Manage Properties, select the System tab, and add the property DEBUG_ENABLED with a value of True. After changing the logging level, the service will need to be restarted for the change to take effect. Turning on application debugging should provide more context clues about what the product is doing at the time and help pinpoint the error.
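
As a rough sketch only, a log4j .properties entry switched to debug might look like the line below. The file location and the logger name vary by product and version, and the name shown here is purely an assumption for illustration; check the product's own configuration file for the real one.

# hypothetical logger name used only as an example
log4j.logger.com.hyperion=DEBUG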

 

Nothing Found…

If these resources do not help, an Oracle support ticket may be required. Additionally, the Oracle Forums can be a good place to post a question.  When creating a support ticket or posting to a forum, include as much information as possible. This is where the problem log will come in handy.

This is a good time to look for updates and patches to the product. Check for patches and updates on http://support.oracle.com and read the release notes for anything matching your problem. Even if nothing obvious comes up, some obscure errors can be solved by simply applying a patch; not all bugs will be in the release notes for the patch. Oracle's hpatch process is pretty straightforward, but older environments might take some time to patch. Always read the entire release notes and installation instructions before applying a patch. Also, sometimes patches are not as proven as the initial installers, because some patches may have just been released and tested with only a handful of clients. So ensure there is a good backup process in case the patch causes unintended problems. The Oracle hpatch process has a back-out feature, but it is not always useful if the patch failed halfway through installation.

Finally, the last part of troubleshooting is intuition. As more problems are encountered and resolved, one can become more confident in resolving upcoming issues. There is no way to have encountered every issue and know the resolution, so the best that you can do is arm yourself with a good knowledge of the architecture, have a set of best practices, and lots of patience for problem solving.