
When I am introduced to business segments that use Hyperion Essbase, I always get asked the same question: "Can you explain what sparse and dense mean?"  Although I agree that users don't HAVE to understand the concept, I contend that it is extremely valuable if they do.  It will not only help them become more efficient users; it will also go a long way toward helping them understand why something simple in Excel isn't always simple in Essbase.  If users understand what a block is, and what it represents, they have a much better experience with Essbase.

If you are a relational database developer or a spreadsheet user, you tend to view data in 2 dimensions: an X and Y axis, equivalent to the rows and columns in your spreadsheet or database table.  Essbase is a little different in that it stores data in 3 dimensions, like a Rubik's Cube, so it also has a Z axis.  Essbase databases refer to these "Rubik's Cubes" as blocks.  An Essbase database isn't one giant Rubik's Cube; it can be made up of millions of them.  The size and number of possible blocks a database has are determined by the sparse/dense configuration of the database.

An Essbase outline has a number of dimensions.  The dimensions can range in quantity and size, but each one is identified as either dense or sparse.  The dense dimensions define how large each block will be (the number of rows, columns, and the depth of the Z axis).  The sparse dimensions define the number of possible blocks the database may hold.  Assume the following scenario: a database exists with 3 dense dimensions and 2 sparse dimensions.  The dense dimensions are as follows:

Net Income
    Income
    Expenses

Qtr 1
    Jan
    Feb
    Mar

Version
    ~ Actual
    ~ Budget
    ~ Forecast

Remember, the dense dimensions define the size of the blocks.  These dimensions would produce a block like the one pictured below, and every block in the database would have the same structure.

For those more familiar with Essbase design, this example assumes that no member is dynamically calculated or tagged as label only, to reduce complexity.

[Figure: a sample block built from the three dense dimensions]
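As a quick worked example (assuming every member shown above is stored), the block would contain 3 x 4 x 4 = 48 cells: 3 members in the first dense dimension, 4 in the second, and 4 in Version.  Since each cell in a block is an 8-byte double, that works out to roughly 384 bytes per uncompressed block.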

 

The sparse dimensions are below.

Total Product
    Shirts
    Pants

Total Region
    North
    South
    East
    West

Each unique combination of sparse members gets its own block.  There will be a block for Pants – North, one for Shirts – North, and so on.  Since there are 3 members in the Total Product dimension and 5 members in the Total Region dimension, there are a total of 15 (3 x 5) possible blocks.  If a database had 5 sparse dimensions, each with 10 members, it would have 100,000 (10 x 10 x 10 x 10 x 10) possible blocks.  Keep in mind these are possible blocks; Essbase only creates a block when data actually exists for that combination, which is why these dimensions are called sparse.  Below is a representation of the possible blocks for Shirts.

[Figure: the possible blocks for Shirts across the Total Region dimension]

 

I started my career as an accountant and never had any aspirations of doing the same thing all day, every day.  While I struggled through what I considered monotonous job functions, I developed a knack for finding ways to automate my job.  As a result, I didn't have to do repetitive tasks and I had more time to learn the business.  Don't get me wrong, accountants possess a unique set of skills and talent that I respect tremendously, and accounting is a critical function of any business.  So, kudos to you accountants!

When I get involved with building new applications with Hyperion, or updating existing models, it pains me to see accounting, finance, and the staff who support Hyperion continue to perform repetitive tasks that dominate their time.  It can drive talented people to look for employment elsewhere.  It inflates salary costs and jeopardizes credibility as human error increases.  It also deteriorates the quality of business analysis, introducing a greater risk of poor decisions.  Inflated expenses and poor management decisions can be catastrophic to any business.

Automation in accounting and finance areas is critical to productivity.  Supporting the constant push from management to become better and faster with fewer resources is always challenging.  If your Hyperion environment is supported outside of finance, IT areas are under just as much scrutiny.  How much of your time, or your staff's, is spent generating reports?  How much more time could be spent helping analyze the business and adding value to management decisions?  From an IT perspective, how much of your time is spent supporting the environment and responding to requests whose answers could be automatically generated?  If 20% of your repetitive tasks were eliminated, how much more effective would you be?  How much more experience would you gain?  How much more marketable would you be, both internally and externally?

Many of the possibilities for automation are never discussed.  Most people don't even realize how much time they spend performing repetitive tasks that could be automated.  Some think it would be impossible to automate, and others think it would be too expensive.  The examples below were both accomplished in a matter of weeks.  The investment had a positive return within months.  The non-monetary gain was felt immediately.

Don’t think of why it can’t be done.  Think of a solution without constraints and ask, “How can we get there?”  With the proper guidance and background, massive improvements can be accomplished with minimal effort.

To spark some thought, think about these situations.

Monitoring Essbase jobs and keeping users informed of system status

Are you responsible for managing all the jobs that run on your Essbase server(s), and constantly asked by users whether something has completed or when it will complete?  Some organizations have a person dedicated to managing this information flow.

I implemented a solution at a large financial institution to conquer this problem.  The result was a solution that required zero effort to maintain and provided a summary of over 50 processes in one web page.  It gave the status of the process, when it last executed, if there were any errors, and a link to the log and error files if they were required.  Access was granted to all the Essbase administrators.  Another page was available for all users that displayed the status of the application, when it was last loaded, when it was last calculated, and several other useful sources of information.

The days of searching through folders on multiple servers are long gone for the system administrators.  Users are better informed, and support tickets diminished substantially.  The estimated time savings was 4-6 hours per day.

This solution was built using existing technologies, limited to MaxL, Windows scripting, ASP.NET, and access to an IIS server to host the website.  It was 100% maintenance free and built dynamically enough that applications could be added, renamed, or deleted without changing any code or processes.

Distribution of reports

A large international organization distributed over 150 reporting templates to an equal number of people in the US and abroad.  These templates were distributed daily throughout the monthly close of business.  The daily adjustment cycle finished updating the reporting Essbase application around 2 AM.  When a finance staff member arrived around 8 AM, the work began.  The template was refreshed and saved for each of the 150 business entities.  Emails were then sent to each of the 150 people with their respective reports.  This process took about 6 hours every day it was performed.

Using existing technology, a process was created to traverse a two-column spreadsheet maintained by finance: the first column held the business unit, and the second held the email address the report was to be sent to.  Using the Essbase toolkit and Excel, a process was initiated as soon as the database was updated.  It opened a spreadsheet that included the template, changed the business unit, refreshed the template, saved it, and emailed it to the intended recipient.  The process took less than 1 hour, and all the reports were distributed before 4 AM.  Customers received their reports earlier (those in Asia a day early), no human errors were made, and the finance staff had an additional 6 hours to add value.

 

“Pre-Installation Requirements”

In installment #1 of this guide, we reviewed the architecture considerations and defined a simple architecture to use as a reference moving forward.  I recommend you read the previous post before you pick up this one.  I also recommend reading "Oracle Hyperion Enterprise Performance Management System Installation Start Here Release 11.1.1.2" (128 pages) from the Oracle Documentation Library.

To reiterate our general approach, the Hyperion architecture establishment and installation activities in our organization cover the following five areas.

  1. Define an Architecture – Work with the client to define the hardware, software, and the distribution of Hyperion components
  2. Provide Pre-Installation Requirements – Give the client a detailed list of activities to complete prior to the installation
  3. Installation – Run the installation and configuration utilities
  4. Validation – Perform all functional activities necessary to validate the environment's readiness
  5. Documentation – Provide the client with all the details of the environment as it is configured

In this post, I will go through step 2, the deliverables the Hyperion architect should provide.  Steps 3-5 will be covered in the coming weeks.  For the sake of simplicity, I will be using the example of a common installation, primarily Hyperion Planning, Hyperion Financial Management (HFM), and the core BI applications.

As part of any installation, some items need to take place before the Fusion Installer is started.  I like to create a checklist of things that need to be done.  Oftentimes these items are out of my control, and I rely on Database Administrators, Network Administrators, and other System Administrators.  This checklist contains the following elements.

  • Web Application Server Specifications
  • Relational Repository Information
  • General System Administration
  • Network Information
  • Additional Components
  • DCOM Configuration
  • IIS and .NET Configuration

I'll start with the Web Application Server Specifications.  Once the web application server platform is chosen from the table below, the installation and configuration often falls to System Administrators.  Items such as clustering, system account management, and JVM settings are managed outside of the Hyperion installation.  Other times, I'll get admin access and manage it myself.  The first task is to validate that the application server is certified.  The table below comes directly from "Oracle Enterprise Performance Management System – Supported Platforms Matrices (Oracle Enterprise Performance Management System, Fusion Edition Release 11.1.1.2)" in the Oracle documentation library.  I recommend reviewing this document, as it can change from release to release.

 

Server / Notes

  • Oracle Application Server 10g (10.1.3.3.x) (a) – If Oracle Application Server is used as the Web application server, Oracle HTTP Server is also required.  Profitability and Cost Management supports only Oracle Application Server 10.1.3.x.
  • Oracle WebLogic Server 9.2 (MP1 minimum) / 9.2.x (b) – Shared Services requires WebLogic Server patch CR283953 for all platforms.  You can obtain this patch at the BEA web site.
  • IBM WebSphere 6.1.0.17 / 6.1.x (c)
  • Embedded Java container (d)

(a) Supports these editions: Java, Standard One, Standard & Enterprise.  Includes support for Oracle Application Server Single Sign-On.

(b) WebLogic Express is supported for each supported version of WebLogic Server; non-base versions are supported only via manual deployment.

(c) WebSphere Express, ND, and XD Editions are supported for each supported version of WebSphere; ND and XD are supported only via manual deployment.

(d) For this release, Apache Tomcat 5.5.17 is the embedded Java container that is installed automatically on all platforms.  Apache Tomcat is supported only in this capacity.  If future EPM System releases embed different Java application servers, Apache Tomcat will no longer be supported.  For deployments that require high availability or failover, Oracle recommends using a commercially supported Web application server that supports high availability and failover.

I request the URL and authentication information since this will be needed during the deployment.  If I am doing a manual deployment, I will request contact information from the web application server administrator and work in collaboration on the deployment.

The next item on my checklist is the Relational Repository Information.  This is mostly straightforward.  In general, I like to create a tablespace/database for each component (Hyperion Foundation, Essbase Admin Services / Business Rules, EPMA, Planning, Financial Management, etc.).  A distinct tablespace/database for each component makes it easier to manage, in my opinion.  Although it may not be strictly necessary, the documentation does not seem to be clear on the matter, so I say 'better safe than sorry'.  For the installation and deployment, I'll need credentials for each repository.  Based upon some Q&A, I'll make initial size recommendations.

For the target installation servers, I keep a General System Administration checklist of the information I'll need to execute the installation.  It is made up of the following items.

  • Operating Systems version/build
  • Account on each server to run the Hyperion services and account requirements
  • External Authentication information (MSAD, LDAP, or OID if applicable)
  • Drive/Volume information identified for installation of the Hyperion software.
  • DCOM and .NET account information if HFM or FDM are to be installed

Next, I identify the Network Information necessary for appropriate communication between servers.  This includes IP addresses, DNS information, validation of name resolution, traces between servers, subnet configuration, etc.  This is vital so the components can communicate via Fully Qualified Domain Name, short name, and IP address.  Some components use different variations of name resolution, probably because the components were developed separately and have not been fully standardized.

In addition to the Hyperion software, Web Application Servers, and Relational Repositories, there are a few Additional Components that need to be installed.  A PDF writer is needed for the Reports Server to render .pdf reports in Workspace.  This can be GhostScript or Acrobat Distiller.  I suggest referring to the "Start Here" documentation to see what is currently supported, but we often go with GhostScript because it is free.

For the Windows administration, we provide the DCOM Configuration information needed to support HFM, EPMA, and FDM.  This includes the DCOM account information, permissions, and authentication information.  Although this is spelled out in detail in the "Start Here" manual, I like to provide step-by-step instructions with screen shots since DCOM is often confusing…well, it is to me at least.

The last thing we review is the IIS and .NET Configuration.  IIS is often not installed as part of a standard OS build.  We make sure this requirement is specified, ensure .NET is installed, and validate that it is the appropriate version.

As with any installation, I recommend the Installation Architect read, and re-read, the Hyperion manuals on their own rather than relying on this information or intuition.  Details can always change, and your installation may have some caveats that I have not covered.  For our purposes, with all the above activities completed and validated, we should be ready to lay out the binaries and start the Hyperion installation.  We will review the Fusion Installer and Hyperion Configuration Utility in our next installment.

 

Comparing the current period to the prior period is relatively easy to accomplish in Essbase, and is often required when creating a Cash Flow hierarchy.  Assume the following scenario.

An outline exists that includes a Year and a Time_Period dimension.  The Year dimension includes 2007, 2008, and 2009.  The Time_Period dimension includes Full Year, Quarters 1 through 4, and all 12 months.  The dimension type for the Time_Period dimension has to be set to Time.  A dimension named COA (chart of accounts) holds the general ledger account structure.  Below is an example of the Time_Period dimension.

To enable a dynamic approach to solving this problem and minimize the maintenance required as new years are added, an understanding of the following two functions is required.

@PRIOR
The @PRIOR function provides a way to compare a member's value in one Time_Period member to its value in another.  For example, accounts for July could be compared to June, or Quarter 2 could be compared to Quarter 1.  The function accepts two parameters.  The first is the member to get the prior value for.  The second is the number of periods to shift the comparison by.  If @PRIOR("Asset123",1) is used, it provides the value for the period previous to the selected Time_Period member.  So, if June was selected, it would provide the value for May.  If the formula was @PRIOR("Asset123",2) and June was selected, the result would be the value for April (2 periods back).  The function uses members at the same generation, so @PRIOR("Asset123",1) would return the value for Qtr1 if Qtr2 was selected.
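As a minimal sketch, a member formula on a hypothetical variance member (call it "Asset123_Chg"; the name is used only for illustration) could apply the function like this:

"Asset123_Chg" = "Asset123" - @PRIOR("Asset123",1);   /* current period minus prior period */

With June selected in Time_Period, this stores June minus May; with Qtr2 selected, it stores Qtr2 minus Qtr1.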

So, what happens if January or Qtr1 is selected?  There is no previous member for these.  This is where the second function comes into play.

@MDSHIFT
@MDSHIFT is similar to @PRIOR, but it lets the calculation reference members across dimensions.  Where @PRIOR only allows references along one dimension, @MDSHIFT allows references to move across multiple dimensions.  If the user expects to see the difference between Jan and Dec of the prior year, or between Qtr1 and Qtr4 of the prior year, @MDSHIFT enables that to happen without hard coding the script.  Again, the goal is to have a script that doesn't need to be maintained.

@MDSHIFT accepts a series of parameters.  If the shift needs to occur along one dimension, it requires one set of parameters; if the shift needs to occur along more than one dimension, it will accept multiple sets.  The function's first parameter is the member you are evaluating, just like the @PRIOR function.  After that comes one set of parameters for each dimension being shifted.  Each set consists of three parameters, of which the first two are required.  The first of the set is the number of positions to shift.  The second is the dimension to shift on.  The third is a range of members to shift along; if this is left blank, Essbase uses the dimension's level 0 members.  A sketch of the signature is below.
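In outline form, the signature looks something like this (a sketch based on the description above; the bracketed pieces are optional, and the whole second set repeats for each additional dimension):

@MDSHIFT(mbrName, shiftCnt1, "dimName1", [range1] [, shiftCnt2, "dimName2", [range2]]);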

To get the prior value for Jan, which is Dec of the previous year, the formula would be @MDSHIFT("Asset123",-1,"Year",,11,"Time_Period",).  The first parameter is the member to evaluate.  The next two parameters reference the previous member (-1) in the Year dimension.  Since the Year dimension members are level 0, the fourth parameter is not required.  The next set references Dec (11, or move forward 11 from Jan) in the Time_Period dimension.  The last parameter is not required since we only want to reference level 0 members again.

Putting it all together
If we put these two functions together with a basic if/then/else statement, we get a dynamic formula that won’t need to be updated as we progress through time.  It would look something like this:

IF(@ISMBR("Jan"))
    /* If Jan, compare Jan in the current year to Dec [shift 11] in the prior year [shift -1] */
    "Asset123" - @MDSHIFT("Asset123",-1,"Year",,11,"Time_Period",);
ELSEIF(@ISMBR("Qtr1"))
    /* If Qtr1, compare Qtr1 in the current year to Qtr4 [shift 3] in the prior year [shift -1].
       The last parameter includes a range since we are not using level 0, which is the default. */
    "Asset123" - @MDSHIFT("Asset123",-1,"Year",,3,"Time_Period",("Qtr1","Qtr2","Qtr3","Qtr4"));
ELSEIF(@ISMBR("YearTotal"))
    /* If the full year, compare to last year [shift -1] */
    "Asset123" - @MDSHIFT("Asset123",-1,"Year",);
ELSE
    /* All other members, which would include Feb through Dec */
    "Asset123" - @PRIOR("Asset123",1);
ENDIF;

 
Executing calculations that only run on blocks that have changed is a great feature in Essbase.  It enables administrators to calculate the database in a fraction of the time and is referred to as calculating "dirty" blocks, an update calc, or Intelligent Calculation.  This is awesome.  "Why shouldn't I use it all the time?" you might ask.  Understanding how the Essbase calc engine works is critical to answering this question.
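As a minimal sketch, this behavior is toggled in a calc script with the SET UPDATECALC command (it can also be set server-wide in essbase.cfg):

SET UPDATECALC ON;   /* Intelligent Calculation: calculate only blocks tagged dirty */
CALC ALL;            /* consolidates the dirty blocks and tags them clean */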

The Essbase calc engine calculates each block in a specific order (see figure 1).  The first block it calculates is the first level 0 block of the first sparse dimension.  It then traverses to higher levels and moves through the dimension from top to bottom until the entire dimension is consolidated.

When a level 0 block is changed, it, and all of its parents, are tagged as dirty (meaning they need to be calculated again).  When a calculation is executed on just the dirty blocks, the process is the same except that it skips all the "clean" blocks.  Once a block is calculated, its dirty tag is changed to clean.  So far, so good!

Revisit figure 1, which shows a simple hierarchy with the order in which the blocks are calculated, 1 through 10.

Figure 2 shows what happens if New York is updated.  Blocks 5, 6, and 10 are tagged as dirty.  The next calculation, if set to calculate only the dirty blocks, would only calculate blocks 5, 6, and 10, in that order.

Here is where things get a little ugly.  When an application is open for write access, as a planning or forecasting application would be, it is very possible that users are updating data DURING the calculation process.  The timing of these events is critical to understanding why calculating only dirty blocks can cause inconsistencies.

When a calculation starts, it identifies which blocks need to be calculated (5, 6, and 10 in this example).  Immediately after that, it starts calculating block 5.  If Texas is updated while block 5 is being calculated, what happens?

Figure 3 shows the state of the clean/dirty blocks when the calculation is finished with block 5.  It is exactly what you might expect at this point.  Blocks 6 and 10 are still dirty.  The update of Texas caused Blocks 1, 3, and 10 to be tagged dirty.

This is the critical piece.  Keep in mind how the calculation engine works: it will continue to calculate blocks 6 and 10.  Also note that the running calculation does NOT reevaluate what needs to be calculated.  It will not calculate blocks 1 and 3.

Figure 4 shows the state of the blocks after the calculation finishes.  Only blocks 1 and 3 are dirty at this point because 10 was included in the calculation.

When the next calculation is executed, the only blocks that are dirty are 1 and 3.  Can you see the problem now?  After blocks 1 and 3 are calculated, is block 10 accurate?  Does U.S. equal the total of South, East, and West?  Unfortunately, it does not.

One could argue that it will get updated the next time data is changed.  In a very simple example with 3 levels, this would probably correct itself rather quickly, if the problem happened at all.  In a more realistic example, where a company has 10 or 20 levels in its organization dimension, the problem is likely to recur and may not be corrected until a full calculation is executed.  In most situations, it is not acceptable to have a database that consolidates correctly only some of the time, with no warning that it is not accurate.  Reporting can be incorrect, and bad management decisions can result.
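One common safety net, sketched below, is a periodic calc script that ignores the clean/dirty tags and consolidates everything:

SET UPDATECALC OFF;   /* ignore clean/dirty status and calculate every block */
CALC ALL;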

Using the dirty calc feature is a great tool to have in your arsenal.  It can save hours of processing time.  It can make you look like a genius.  Without an understanding of its pitfalls, though, it can be the source of countless wasted hours trying to figure out why a cube isn't consolidating correctly.  A worst-case scenario: a cost center manager updates their budget, it never gets consolidated correctly, and the problem isn't identified until it is too late.