Tag Archive for: Hyperion

The ability to import font types into Hyperion Financial Reporting is a common request from many companies, typically driven by corporate reporting standards. Not only is this possible, it’s a quick and easy exercise, detailed below.

Step 1: Locate the Font Folder.

The font type files (normally identified by a .TTF or .ttf extension) can be found in the “Fonts” folder located in your Windows directory (likely on your C: drive). The key here is locating this folder on the server where Hyperion Financial Reporting has been installed.

Step 2: Copying & Pasting the New Font File.

This is as easy as it sounds… just copy and paste the new file into this directory.
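
If the same font needs to go to several servers, the copy can also be scripted. Below is a minimal Python sketch with hypothetical paths; note that on some Windows versions a font copied programmatically may also need to be registered before other applications see it, so verify the new font appears in Financial Reporting afterward.

  import shutil
  from pathlib import Path

  # Hypothetical paths; adjust them to your server's layout.
  new_font = Path(r"D:\staging\CorporateFont.ttf")
  fonts_dir = Path(r"C:\Windows\Fonts")

  # Copy the .ttf into the server's Fonts folder.
  shutil.copy2(new_font, fonts_dir / new_font.name)
  print(f"Copied {new_font.name} to {fonts_dir}")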

Step 3: Creating a Report.

Open Hyperion Financial Reporting and create or modify a report. When selecting the font type, notice the new font that was just added in Step 2 above. Note that it’s best to close the Financial Reporting client before importing the new font file onto the server; this will ensure that the client recognizes the new file.

 

Reporting solutions often require companies to filter out a top range of Key Performance Indicators; for example, the top 10 expenses related to marketing. Hyperion Financial Reporting makes this type of reporting easy for developers by providing the “Top” properties checkbox. The difficulty arises when a company requires a solution to display the bottom 10: those 10 expenses that account for the majority of marketing-related expense. Hyperion Financial Reporting has nothing built in to provide this type of information.

As you might expect, knowing your smaller expenses is important, but knowing the largest, those where you can improve margin, is vital. A solution to display the bottom 10 is detailed below; this solution displays the 10 largest negative values rather than the 10 largest positive values.

The high-level solution includes the following functionality:
  a. Inserting a “Rank” column.
  b. Sorting on the “Rank” column.
  c. Adding conditional suppression for the bottom 10.

Step 1:

Create a report grid with a formula column as the first column (Column A below).

 

Step 2:

Insert the “Rank” function on the formula column, and be sure to choose the “Ascending” property. Adding “Rank” assigns each row a rank based on the data returned. The example below ranks off of Column A. The ranking is used in Step 4 when adding conditional suppression.

Step 3:

Apply a row “Sort” to the grid. You find the “Sort” property by placing focus on the entire grid (left-clicking the upper left-most cell). Choose to apply sorting to the “Rows”, sort by “Column A”, and sort in “Ascending” order. Sorting determines the order, ascending or descending, in which the data is displayed. The sorting is used in Step 4 when adding conditional suppression.

Step 4:

Add conditional suppression to the row(s). This logic determines which data rows are ultimately displayed to the user. To add conditional suppression, highlight the row and click “Advanced Options”. Because the requirement is to show the bottom 10, suppression should hide any row with a “Rank” value greater than 10 (you will also want to suppress rows where “No Data” is returned).
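
To make the combined logic concrete, here is a minimal Python sketch of what the rank, sort, and suppression steps accomplish together. The member names and amounts are made up for illustration; in the report itself this is done with the “Rank” formula column and the suppression condition above, not with code.

  # Sample marketing expenses; negative values are expenses, and the
  # member names and amounts are invented for illustration.
  expenses = {"Ads": -500, "Events": -120, "Print": -75, "Web": -300,
              "Radio": -260, "TV": -900, "Mail": -40, "Swag": -15,
              "PR": -220, "Travel": -180, "Misc": -5, "Sponsor": -650}

  # Ascending rank: the most negative value (the largest expense) is rank 1.
  ranked = sorted(expenses.items(), key=lambda kv: kv[1])

  # Suppress everything ranked greater than 10, mirroring the report condition.
  for member, value in ranked[:10]:
      print(member, value)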

When this report is run, only the bottom 10 are displayed to the user… those marketing expenses with the largest negative values. The solution above essentially does what a “Bottom” checkbox would have provided had Hyperion built this functionality into the application.


“Validation”

In installment #3 of this series we installed and configured the 11.1.x software.  In this installment we will discuss what the Infrastructure Architect will do before the environment is turned over to the development or migration teams.

It is quite frustrating for developers if the environment is not fully functional when they start using the system.  Additionally, it is very frustrating for the installation architect to have users in the environment while issues are being debugged.  Each installation and configuration project plan should include at least a day or two to review an environment, restart it a few times, check the logs, and then test the functionality of all installed components.  The number of items to validate depends on the products used and licensed by the client, but it should start with the following list, adjusted as necessary.

  • Shared Services
  • Essbase
  • EAS and Business Rules
  • Planning
  • Financial Reporting
  • Web Analysis
  • Interactive Reporting
  • SQR
  • Workspace
  • Smart View and Provider Services
  • Financial Management
  • Financial Data Quality Management
  • Oracle Data Integrator
  • Data Relationship Management
  • Strategic Finance

The Installation Architect will test the functionality of each installed product to ensure there are no errors.  This activity takes a combination of functional and technical ability.  The installation architect must know how the application works from the interface as well as understand what any potential errors mean and how they may be corrected.  I’m not suggesting the infrastructure engineer know how to create a P&L report or design a Planning application, but the ability to navigate the user interfaces and test functionality avoids the problem of encountering errors after development has begun.
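
Much of this validation is interactive, but a quick scripted smoke test of the web components can catch dead services before the manual pass begins. Below is a minimal Python sketch; the host names, ports, and context paths are hypothetical and should be replaced with the values recorded during your installation.

  import urllib.request

  # Hypothetical URLs: substitute the host names, ports, and context
  # paths recorded during your installation.
  components = {
      "Shared Services": "http://epmserver:28080/interop/",
      "Workspace": "http://epmserver:19000/workspace/",
      "Planning": "http://epmserver:8300/HyperionPlanning/",
  }

  for name, url in components.items():
      try:
          with urllib.request.urlopen(url, timeout=10) as resp:
              print(f"{name}: HTTP {resp.status}")
      except Exception as exc:
          print(f"{name}: FAILED ({exc})")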

Early in my exposure to these applications, I’d spend a lot of time with a developer or functional user of the applications, who would show me how to test the system.  I’d ask them to tell me the first thing they try to do when they get a new environment.  It is always useful to know more about how the applications are used.

Some of the common problems that occur include the following:
  • EPMA dimension server does not resolve in Workspace
  • Shared Services doesn’t find users in Active Directory
  • Cannot create a Planning application
  • Cannot create the FDM database
  • ODI repositories are not available
  • Common Essbase commands do not work

The solutions to these problems may involve anything from database access permissions, Windows security rights, and DCOM settings to an incorrect Active Directory setup.  Over the past few years, working on dozens of installations, I’ve seen all of them.  Encountering many of these is how the pre-installation requirements covered in installment #2 have been improved.  Some of these problems don’t arise until functionality is tested, so it’s important to test each installation and environment.  I’ve had situations where the development environment tests out fine and the QA environment has issues.  Each installation is usually different from the last because of server settings, security policies, database settings, firewalls, or some other nuance.

If there are problems with the functionality, there are a number of resources available to assist in troubleshooting.  I find the Oracle Technology Network forum to be very useful; I recommend anyone looking to work in this space get an ID and get involved.  You may also find some really useful material on blogs like this one, or from a number of other very experienced bloggers.  There is a wealth of information in the Oracle Support knowledge base.  In addition, if you have a support agreement with Oracle, you can register and get support directly from Oracle.

Assuming everything is functioning as expected, the environment is turned over to the appropriate support person, or maybe support falls on the same individual that did the installation.  Either way, there is a lot of information that needs to be collected.  In the next installment, we’ll look at the information that should be compiled to capture the state of the environment as it was at the end of the installation as well as information that is useful to those that will be using the system.

 

As Enterprise Performance Management and Business Intelligence systems become adopted as the core decision-support mechanisms within organizations, the need for transparent, fact-based decisions increases.  It is not enough for these systems to provide voluminous amounts of data to the end user for analysis; they must also tie data and decision inputs to the collaborative decisions that these systems support.

Although the organizational adoption of this style of decision making may face challenges, the technological groundwork is already in place.  Oracle’s addition of the Annotation Service to Financial Reporting 11.1.x allows the capture of shared information against reporting objects and data.  This tool allows for threaded discussions and comments within EPM Workspace.

Let’s take a look at how users can utilize this tool against a Financial Report for a Planning application.

 

Many Hyperion Planning administrators are eager to customize the Planning application.  Questions are always posted on the Oracle Technology Network forums as to how, and what, is customizable.  As with any technology challenge, given the right resources and enough time, anything can be customized.  It is unrealistic to think that projects have an unlimited number of people and unlimited time to create a completely customized solution.  However, there are a number of things built into Hyperion Planning to support user customization.

  • Planning includes templates that control the layout and content of PDF reports of data forms, data form definitions, task lists, and planning units.
  • Hyperlinks can be added to the Planning Tools page to support quick access to specific pages.
  • The appearance of Planning can be customized by changing the appropriate style sheets, which are files that control the UI of the Planning application.
  • Templates can be changed to personalize text, colors, and images in the Planning interface.
  • Workflow tasks can be changed so each state has a unique color.
  • Workflow states (Not Started, Approved, etc.) can be personalized so the state more accurately represents the business naming convention.
  • Workflow actions (Start, Reject, etc.) can be personalized so the action more accurately represents the business naming convention.
  • Custom spreading patterns can be created.

Future articles will provide a step-by-step approach to each of these customizations.  The Hyperion Planning Administrator’s Guide also gives an overview of how each of these customizations is accomplished.

 

Backing up Essbase can be accomplished in a number of ways, and some methods suit some organizational cultures better than others; for that reason, it is hard to argue that one method is better than another.  Below are two methods and the pros and cons of each.

There are a number of factors that must be considered.  If the environment uses some of the newer Hyperion tools, like EPMA, then consideration must be given to synchronizing the warehouse that holds the data for EPMA.  Where the different Hyperion applications that work together (Shared Services, the web server, etc.) reside is also a factor.

To minimize the complexity of this discussion, only information related to Essbase will be discussed.

Backup the entire server

Pros: An image of the entire server is available in the case of disaster recovery and is normally in sync up to the point of failure
Cons: Speed, cost, and data availability

Taking an image of the entire server is one option, and it provides the most secure backup strategy.  If there is a hardware failure, getting back to the point of failure does not require a server rebuild, and this method is probably the quickest way to restore all Essbase applications.  Price, speed, and data availability must be considered, however.  Taking an image of a server can be very time consuming, and quite often Essbase must be shut down for the image to be taken without skipping critical files.  Because a large amount of data is backed up, a large amount of storage is required, and there can be a very expensive price tag for the amount of tape and/or SAN space involved.  The time Essbase is down can have a significant impact on the people using it.  To effectively image a server without significant downtime, techniques like shadow copy or data mirroring are likely required.

Backup critical Essbase files

Pros: Speed, cost, data availability
Cons: Recovery time is sometimes longer, more effort is required if a complete system failure occurs, and data loaded between the most recent backup and the point of failure is lost

The files required to recover from a catastrophic event are actually very small.  The bulk of the data related to Essbase is in the pag and ind files, the data and index files; in most environments, these consume at least 90% of the total space.  If they are excluded from the backup process, the process can be much faster and far less expensive, and Essbase does not need to be down for the backup to occur.  Although this method can take longer to restore an entire server, it can be quicker to restore a few applications.  In most situations, a faster, cheaper solution that does not hurt availability is a far more palatable option.  This is only an option if you have either the data that sources the databases or data exports (input or level 0) of the Essbase databases.  If these are available, the pag and ind files can be rebuilt by reloading the databases.
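
As a rough illustration of this approach, the Python sketch below walks an Essbase application directory and copies everything except the pag and ind files to a backup location. The paths are hypothetical, the exact directory layout depends on your version and installation, and a few files may be locked while Essbase is running.

  import shutil
  from pathlib import Path

  # Hypothetical locations; adjust for your installation's layout.
  app_dir = Path(r"D:\Hyperion\products\Essbase\EssbaseServer\app")
  backup_dir = Path(r"E:\backups\essbase_app")

  # Copy everything except the large data (.pag) and index (.ind) files.
  for src in app_dir.rglob("*"):
      if src.is_file() and src.suffix.lower() not in (".pag", ".ind"):
          dest = backup_dir / src.relative_to(app_dir)
          dest.parent.mkdir(parents=True, exist_ok=True)
          shutil.copy2(src, dest)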

Deciding on a backup method

Determining the best option boils down to cost and resources.  Taking an image of the server requires at least twice as much disk space, a more complicated network/hardware infrastructure, and far more resources to build and store sufficient backup versions.  What is gained is an up-to-the-minute backup.  If the cost associated with this method outweighs the cost of having to rebuild the data that was loaded between the last backup and the time of failure, then it is the best option.  In my opinion, it is hard to justify the capital required to support this for what little is gained.

First, disasters rarely happen.  With the RAID and SAN solutions available today, disk failures that cause data loss are not the main reason a server fails; a hardware component failure is.  If the component that fails is replaced, the data doesn’t have to be restored.

Second, if a database becomes corrupt and unusable, a complete reload of the data is required.  Many times corruption can exist, unnoticed, in a database for weeks.  If the data is not available to reload, it is possible to lose weeks or months of data.

Third, if a disaster does occur, any data sourced from another system can be recreated.  Remember, the only data that must be recreated is the data that changed after the most recent backup, which normally means since the previous night.  The data loaded by users, either through Hyperion Planning web forms or spreadsheets (Excel Add-In or SmartView), also exists somewhere else.  It might be frustrating for users to enter it again, but the data does exist and can be restored, normally with minimal effort.  In very large environments, this backup method can save millions of dollars.

Whether the decision is made to mirror the server, to back up the critical Essbase files while excluding the data (pag) and index (ind) files, or to take some approach in the middle, it is wise to test the disaster recovery plan.  There is nothing worse than restoring from a backup only to find out that it is useless.

The second installment of this topic will be dedicated to what is required to have a secure DR plan when the pag and ind files are excluded from a backup strategy.


Fragmentation occurs naturally when a database is used frequently: data is added, deleted, and modified within it.  The more changes occur, the more fragmented the database becomes as data is scattered through the pag files, and the size of the database becomes inflated.  The index files have to compensate for this, and what starts as a simple map becomes a spaghetti mess.

If you are unfamiliar with Essbase’s storage method, here is a brief overview.  Essbase has two sets of files related to the data stored in a database.  The numeric data is stored in files with an extension of pag.  Essbase also has files with an ind extension.  These index files are used to store the pointers to the data in the pag files.  As data is requested, Essbase must read the index files to know where the data is located in the pag files.

A fragmented database can suffer drastic effects on size and performance.  If you have a database where performance continues to decrease, fragmentation might be the source of the problem.  Performance degradation can occur over weeks or months, but it can also occur much more quickly; databases with frequent data loads or updates can be impacted within a day.

A great way to identify the impact fragmentation is having on a database is to export your data (level 0 in most cases), reload it, and execute the process in question.  Exporting and reloading the data completely eliminates fragmentation.
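
For those who want to script that test, here is a minimal sketch. It writes a MaxL script and runs it through essmsh, the MaxL shell that ships with Essbase. The application (Sample.Basic), credentials, and file paths are placeholders; verify the statement syntax against the Technical Reference for your Essbase version before using it.

  import subprocess

  # Hypothetical MaxL script: export level-0 data, clear the database,
  # and reload the export. Application, database, credentials, and file
  # paths are all placeholders.
  maxl = """
  login admin identified by 'password' on 'essbaseserver';
  export database Sample.Basic level0 data to data_file 'C:/exports/basic_lev0.txt';
  alter database Sample.Basic reset data;
  import database Sample.Basic data from data_file 'C:/exports/basic_lev0.txt' on error abort;
  logout;
  """

  with open("defrag.msh", "w") as f:
      f.write(maxl)

  # essmsh is the MaxL shell that ships with Essbase.
  subprocess.run(["essmsh", "defrag.msh"], check=True)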

For more information about pag or ind files, please refer to the database administrator’s guide provided by Oracle.

 

Many people use Custom Lists in Excel, sometimes without even knowing it.  If you have ever typed January into a cell and used autofill (clicking the dark plus sign and dragging across other cells) to create February through December, you have used a Custom List.

Excel has a few Custom Lists set up for users when it is installed.  Select the Tools / Options menu and display the Custom Lists tab to view them.  Users can create their own Custom Lists in this dialog box by entering a list separated by commas, or by importing a range of cells that already contains a list.

For Essbase users who use the Hyperion Spreadsheet Add-In or SmartView, this can become a valuable tool.  Many times Essbase users will want to display a specific list of accounts, measures, products, etc.  Rather than selecting these from the member selection, or typing them, Custom Lists can be created and used to reduce the effort.

Let’s assume a user is responsible for a subset of the existing products and those products are only sold in a few of the markets.  The user may spend a lot of time creating the market list every time they create a new retrieve.  If the user creates a Custom List, they can automate this selection process.  A Custom List might include the following members.

Columbus,Cincinnati,Los Angeles,Tempe,Dallas,Austin,Seattle,Denver,Nashville

All the user has to do now is type Columbus in the first cell and use autofill to list the rest of the markets.  This function can save those who frequently create ad hoc reports a lot of time.

Custom Lists can be created for just about anything, are easy and quick to create, and are useful in a variety of situations.  www.In2Hyperion.com is not just for those in a technical capacity.  User related ideas, such as using Custom Lists, will become more prevalent on this site.  Sign up for our newsletter and receive notifications when more Excel tips for Essbase users become available.

 

Users of Essbase have some control over the performance of a database and how responsive it is when retrieving data.  With a basic understanding of how Essbase stores data, users can optimize performance by changing the order of the dimensions and members in a report.

It might be helpful to read our article on sparse and dense dimensions.  Here is a brief overview:

An Essbase database is comprised of thousands, if not millions or billions, of data blocks.  Each block of data, and its size, is defined by the dense dimensions in the Essbase outline.  The volume of blocks is dictated by the unique combinations of sparse dimension members.  If Time and Accounts are dense, each block created would hold all the months for every account.  If Organization and Product are sparse dimensions, there would be a block for each unique combination of Organization and Product.  A block would exist for Center 10 / Product A, as well as Total Organization / Total Product.  If the outline has 20 members in Organization and 15 members in Products, the database could have up to 300 independent blocks.

If a report is written to show an entire income statement for all 12 months for Total Product and Total Organization, how many blocks would have to be queried?  Remember, there is a block for each unique member combination of Organization and Product.  The answer is one, because there is a block for Total Organization/Total Product that includes every account and every member in the time dimension.

How many blocks would be accessed if a report pulled Total Sales (a member in the Accounts dimension) in January for every product?  Since the Product dimension is sparse and there are 15 products, 15 blocks would have to be opened to return the results.

Here is where your understanding of what sparse and dense represent will help you improve your reports.  Opening a data block, reading the contents, and closing it is similar to opening, reading, and closing a spreadsheet.  It is much faster to open one spreadsheet, or block, than 15.  So, if retrieves are written in such a way as to minimize the number of blocks that need to be accessed, or to improve the order in which they are accessed, performance can improve.

I will agree that if data for all 15 products is needed for the report, all 15 blocks have to be opened; there is no way around that.  That said, users often build one worksheet for the income statement and one for the balance sheet.  This means the report makes two passes over the same blocks.  In theory, it takes twice as long to open/read/close a data block twice as it does once.  It is faster to put the income statement and balance sheet accounts in one worksheet, which makes only one pass over the required blocks.  If two separate reports are required, an income statement worksheet and a balance sheet worksheet can be created with cell references to the worksheet that holds the retrieved data.

I frequently see another example of a report requiring multiple passes over the same data block.  Using our example dimensions above, assume product information is required in a report for multiple accounts.

    Jan Feb Mar
Income Product A      
Income Product B      
Income Product C      
Income Product D      
Expense Product A      
Expense Product B      
Expense Product C      
Expense Product D      

The Essbase retrieve above would start from the top of the spreadsheet and move down the rows to retrieve the data from Essbase.  This cycle would open the Product A block, then B, C, and D, and retrieve the associated income for each.  It would then have to reopen the same 4 blocks to access expenses.

The following example, again going from top to bottom, would access both income and expense while the block is open.  The way this retrieve is set up, it eliminates the need to access the same block multiple times, yet still pulls the required information.

    Jan Feb Mar
Income Product A      
Expense Product A      
Income Product B      
Expense Product B      
Income Product C      
Expense Product C      
Income Product D      
Expense Product D      

These examples are very small.  In a real-world example, a report of this size would not produce significant variances in retrieval time.  However, users often have spreadsheets that are hundreds of rows long and take minutes to retrieve.  In these situations, eliminating the need to access the same block multiple times can produce notable improvements in the time it takes to retrieve data from Essbase.
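
A tiny simulation makes the difference plain. The Python sketch below counts how often a block has to be opened for each row layout, assuming one block per product (Product being sparse in our example), that the retrieve walks rows top to bottom, and that a block is closed as soon as the next row needs a different one.

  # Each row is (account, product); a block is keyed by the sparse
  # Product member, and a block is "opened" whenever the next row
  # needs a different product than the current one.
  by_account = [(a, p) for a in ("Income", "Expense")
                for p in ("A", "B", "C", "D")]
  by_product = [(a, p) for p in ("A", "B", "C", "D")
                for a in ("Income", "Expense")]

  def block_opens(rows):
      opens, current = 0, None
      for account, product in rows:
          if product != current:
              opens += 1
              current = product
      return opens

  print(block_opens(by_account))   # 8: each product block is opened twice
  print(block_opens(by_product))   # 4: each product block is opened once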

With a basic understanding of how your database is set up, users of Essbase can help themselves with some simple changes to the format of the retrieve worksheet.  If access to the dimension properties in your database is unavailable, ask your system administrator to supply them.


I started my career as an accountant and never had any aspirations of doing the same thing all day, every day.  While I struggled through what I considered monotonous job functions, I developed a knack for finding ways to automate my job.  As a result, I didn’t have to do repetitive tasks and I had more time to learn the business.  Don’t get me wrong, accountants possess a unique set of skills and talent that I respect tremendously, and theirs is a critical function of any business.  So, kudos to you, accountants!

When I get involved with building new Hyperion applications, or updating existing models, it pains me to see accounting, finance, and the staff who support Hyperion continue to perform repetitive tasks that dominate their time.  It can drive talented people to look for employment elsewhere.  It inflates salaries and jeopardizes credibility through an increase in human error.  It also deteriorates the quality of business analysis, introducing a greater risk of poor decisions.  Inflated expenses and poor management decisions can be catastrophic to any business.

Automation in accounting and finance areas is critical to productivity.  Supporting the constant push from management to become better and faster with fewer resources is always challenging.  If your Hyperion environment is supported outside of finance, IT areas are under just as much scrutiny.  How much of your time, or your staff’s, is spent generating reports?  How much more time could be spent helping analyze the business and adding value to management decisions?  From an IT perspective, how much of your time is spent supporting the environment and responding to requests whose answers could be generated automatically?  If 20% of your repetitive tasks were eliminated, how much more effective would you be?  How much more experience would you gain?  How much more marketable would you be, both internally and externally?

Many of the possibilities for automation are never discussed.  Most people don’t even realize how much time they spend performing repetitive tasks that could be automated.  Some think it would be impossible to automate; others think it would be too expensive.  The examples below were both accomplished in a matter of weeks.  The investment had a positive return within months, and the non-monetary gain was felt immediately.

Don’t think of why it can’t be done.  Think of a solution without constraints and ask, “How can we get there?”  With the proper guidance and background, massive improvements can be accomplished with minimal effort.

To spark some thought, think about these situations.

Monitoring Essbase jobs and keeping users informed of system status

Are you responsible for managing all the jobs that run on your Essbase server(s), and are you constantly asked by your users whether something has completed, or when it will?  Some organizations have a person dedicated to managing this information flow.

I implemented a solution at a large financial institution to conquer this problem.  The result required zero effort to maintain and provided a summary of over 50 processes on one web page.  It showed the status of each process, when it last executed, whether there were any errors, and a link to the log and error files if they were required.  Access was granted to all the Essbase administrators.  Another page, available to all users, displayed the status of each application, when it was last loaded, when it was last calculated, and several other useful pieces of information.

The days of searching through folders on multiple servers are now long gone for system administrators.  Users are more informed, and support tickets have diminished substantially.  The estimated time savings was 4-6 hours per day.

This solution was built using existing technologies, limited to MaxL, Windows scripting, ASP.NET, and access to an IIS server to host the website.  It was 100% maintenance free and built dynamically enough that new applications could be added, renamed, or deleted, all without changing any code or processes.
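
The original solution used MaxL, Windows scripting, and ASP.NET, as noted above.  As a rough Python illustration of the core idea, a monitor only needs to scan each process’s log for errors and report the latest status; the log location and the ERROR convention below are hypothetical.

  import datetime
  from pathlib import Path

  LOG_DIR = Path(r"D:\Hyperion\logs")   # hypothetical log location

  def summarize(log_file):
      """Report one process log: last run time and whether errors appear."""
      text = log_file.read_text(errors="ignore")
      modified = datetime.datetime.fromtimestamp(log_file.stat().st_mtime)
      status = "ERROR" if "ERROR" in text.upper() else "OK"
      return log_file.stem, modified.strftime("%Y-%m-%d %H:%M"), status

  for log in sorted(LOG_DIR.glob("*.log")):
      process, last_run, status = summarize(log)
      print(f"{process:30} {last_run}  {status}")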

Distribution of reports

A large international organization distributed over 150 reporting templates to an equal number of people in the US and abroad.  These templates were distributed daily throughout the monthly close of business.  The daily adjustment cycle finished updating the reporting Essbase application around 2 AM.  When a finance staff member arrived around 8 AM, the work began: the template was refreshed and saved for each of the 150 business entities, and emails were then sent to each of the 150 people with their respective report.  This process took about 6 hours every day it was performed.

Using existing technology, a process was created to traverse a spreadsheet, maintained by finance, that had 2 columns: the business unit, followed by the email address the report was to be sent to.  Using the Essbase toolkit and Excel, a process was initiated as soon as the database was updated; it opened a spreadsheet that included the template, changed the business unit, refreshed the template, saved it, and emailed it to the intended recipient.  This process took less than 1 hour, and all the reports were distributed before 4 AM.  Customers received their reports earlier (those in Asia a day early), no human errors were made, and the finance staff gained an additional 6 hours to add value.
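
For readers who want a feel for the mechanics, here is a hedged Python sketch of the same distribution loop, using openpyxl to read the two-column workbook and smtplib to send the mail.  The refresh step is a placeholder, since the original used the Essbase spreadsheet toolkit and Excel automation; the file names, host, and addresses are hypothetical.

  import smtplib
  from email.message import EmailMessage
  from pathlib import Path
  from openpyxl import load_workbook

  def refresh_template(business_unit):
      """Placeholder: refresh and save the report template for one business
      unit. The original solution did this with the Essbase spreadsheet
      toolkit; substitute whatever refresh mechanism your environment uses."""
      out = Path(f"reports/{business_unit}.xlsx")
      # ... refresh the template against Essbase and save it as `out` ...
      return out

  # Two-column workbook maintained by finance: business unit, email address.
  wb = load_workbook("distribution_list.xlsx")

  with smtplib.SMTP("mailhost.example.com") as smtp:   # hypothetical host
      for business_unit, email in wb.active.iter_rows(min_row=2, values_only=True):
          report = refresh_template(business_unit)
          msg = EmailMessage()
          msg["Subject"] = f"Daily report: {business_unit}"
          msg["From"] = "reports@example.com"
          msg["To"] = email
          msg.add_attachment(report.read_bytes(), maintype="application",
                             subtype="octet-stream", filename=report.name)
          smtp.send_message(msg)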