Summarize The Essbase Data Error File

How many times have you been in a situation where you have to traverse through hundreds of lines and errors from an Essbase data load only to figure out that all the rejected records are caused by an issue with one member?  You load the file again and wham – another error file with issues you didn’t see the first time.

Although this is typically less of an issue in a production environment, these situations are very likely in the development and testing phases of a project.

In2Hyperion is introducing another free tool that will navigate through errors and summarize the reasons for the rejects.  If 1,000 errors occurred because of one member, the feedback provided will show one line.

Hopefully our community will be able to use this utility to save themselves time and frustration.  It’s a free download!  For more information about the license, requirements, and installation, read the Show Unique Essbase Error utility page.
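The idea is simple enough to sketch in a few lines. The snippet below is not the utility itself, just a minimal Python sketch of the approach, and it assumes a typical dataload.err layout where each rejected record is preceded by a reason line that starts with two backslashes; your error file may differ.

from collections import Counter

def summarize_error_file(path):
    """Count how many rejected records share each unique error reason."""
    reasons = Counter()
    with open(path) as error_file:
        for line in error_file:
            # Reason lines in a typical dataload.err begin with "\\", for example:
            # \\ Member Ohio Region Not Found In Database
            if line.startswith("\\\\"):
                reasons[line.strip()] += 1
    for reason, count in reasons.most_common():
        print(f"{count:>7}  {reason}")

summarize_error_file("dataload.err")

If 1,000 records were rejected because of a single missing member, this prints one line with a count of 1,000 next to that reason.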




Blind Men & Elephants: Part 1

A Primer for Master Data Management in Finance Organizations
Part I

Overview

You can only improve what you can measure.  That popular business maxim, like many others today, is highly dependent upon data.  What’s more, it can’t be just any data.  Quality decisions require “Quality” data that is timely and reliable.  Moreover, executives must understand what should be measured, how those measurements are obtained, and much more in order to correlate all the data necessary for accurate decision making.  After all, that’s the objective, right?  Making better decisions faster?

Unfortunately, in many organizations today managers struggle to address some basic building blocks. They often have as much success evaluating information as the proverbial blind men touching an elephant.

You know how the story goes:  the one holding onto the tail thinks he’s got a rope in his hands; the one standing next to a leg thinks it’s a tree; and the guy holding onto the snout is pretty sure the company is selling hoses. Focusing on a subset of available data without the bigger picture in mind can lead to faulty assumptions, poor decisions and inaccurate predictions.

This article provides a summary of the key aspects of Master Data Management (MDM), and clarifies its emerging practice and uses to focus on challenges faced by the Finance Organization of an enterprise.

The Challenge

In today’s constantly evolving enterprise, complex systems are designed to accommodate the needs and emphasis of the individual business units, thereby creating enterprise silos of data presented in different formats and often resulting in contradictory numbers.  The ability to measure corporate progress against KPIs adds even more complexity when information accuracy is tested through repetitive checkpoints and validations, often using manual processes. And the more manual the process, the less likely it is that business rules are enforcing standardization and ensuring consistent usage. Decentralized approaches to data management typically impact the reliability of the data, leading to extended reporting cycles and decision making based on questionable or faulty data, ultimately impacting the company’s risk profile and bottom line.

Gartner predicts that a lack of information, processes and tools will result in more than 35% of the top 5,000 global companies failing to make insightful decisions about significant changes in their business and markets.

“Gartner Reveals Five Business Intelligence Predictions for 2009 and Beyond,” Gartner Inc., January 2009

This leads to an inherent misunderstanding of, and distrust in, the reports used by management to drive key business decisions. This problem is exacerbated by the need to comply with various regulatory standards, such as SOX, Basel II, Dodd-Frank and IFRS.  All too often executives sign reports and filings that are not completely accurate.

Is there a better way?

The past decade has seen the rise of new concepts, processes and tools to help the enterprise deal with the challenges of data quality and information reliability.  In most organizations, the operational business systems rely on one or more sets of data including a Customer Master, an Item Master and an Account Master.  Product Masters are also prevalent in many industries.

By adopting a Master Data Management (MDM) strategy, you can create a unified view of such data across multiple sources. When you combine MDM methodology with strong analytical capabilities, you’re able to derive true value from islands of data.

MDM Defined

There is a lot of confusion around what master data is and how it is qualified. There are five common types of data in corporations:

  • Unstructured—This is data found in e-mail, white papers like this, magazine articles, corporate intranet portals, product specifications, marketing collateral, and PDF files.
  • Transactional—This is data related to the operational systems such as sales, deliveries, invoices, trouble tickets, claims, and other monetary and non-monetary interactions.
  • Hierarchical—Hierarchical data stores the relationships between pieces of data. It may be stored as part of an accounting system or separately as descriptions of real-world relationships, such as company organizational structures or product lines. Hierarchical data is sometimes considered a super MDM domain, because it is critical to understanding and sometimes discovering the relationships between master data.
  • Master—Master data relates to the critical nouns of a business and falls generally into four groupings: people, things, places, and concepts. Further categorizations within those groupings are called subject areas, domain areas, or entity types. For example, within people, there are customer, employee, and salesperson. Within things, there are product, part, store, and asset. The requirements, life cycle, and CRUD cycle for a product in the Consumer Packaged Goods (CPG) sector is likely very different from those of the clothing industry. The granularity of domains is essentially determined by the magnitude of differences between the attributes of the entities within them.
  • Metadata—This is “data” about other data and may reside in a formal repository or in various other forms such as XML documents, report definitions, column descriptions in a database, log files, connections, and configuration files.

“The What, Why, and How of Master Data Management,” by Roger Wolter and Kirk Haselden,  Microsoft Corporation

Let’s be very clear. “MDM” is a comprehensive business strategy to build and maintain a single, dependable and accurate index of corporate data assets and is not just a tool.  It includes technology-assisted governance of the master data and interfaces with operational and analytical systems. So, MDM is not a technology application in and of itself. MDM is a set of business and governance processes all supported by a dedicated technology infrastructure. The technology exists to support the overall MDM environment, not the other way around. But in order to accomplish that, the first order of business must be to gain a complete understanding of how business processes work cross-functionally within the organization.

The shifting landscape of Financial MDM

When is MDM not MDM?  When it’s Financial MDM.
“Myths of MDM,” Gartner, January 2011

Traditionally, MDM could be divided into two discrete worlds – Operational and Analytical. Andrew White, research vice president at Gartner, distinguishes the two.1 He notes that Operational MDM places an emphasis on process integrity and data quality “upstream” in core business applications.  Operational instances deal with sales regions, territories, products, etc. Traditionally, Finance MDM − mastering hierarchy and ledger/account data for use in “downstream” or reporting systems − has been equated to “Analytical MDM.” Here financial users conduct forward-looking analyses – what if an acquisition occurs, what factors impact my goods production, etc.

Operational MDM 

Any enterprise requires significant amounts of data to operate under current conditions and to plan for the future. Organizations gather petabytes of information regarding sales, customer service, manufacturing and more. A key component of Operational MDM is transactional data – time, place, price, payment method, discounts, etc. Operational MDM supports day-to-day activities of an organization, but can’t deliver insights to guide decision making.

Analytical MDM

An enterprise uses Analytical MDM to make overarching evaluations and forward-looking decisions.  Analytical MDM processes utilize information such as customer demographics and buying patterns. Large data warehouses enable comprehensive data aggregation and queries, and applying Analytical MDM delivers insights critical to planning for the future. The value derived by the business is directly dependent on the quality of that operational data.

The trouble, as White pointed out, is that some applications used in financial organizations operate on transactions published from operational systems and actually behave like business applications.  The example given by White is a corporate reporting function that initially harmonizes disparate master data for global/corporate reporting, but then must author its own versions (or views) of the same hierarchy and master data.  As such, this application is no longer purely “downstream” or “Analytical MDM” because the data now has to be governed much as other application-specific information is authored.  It gets even more confusing as the newly-authored hierarchy is shared and re-used across the organization and operational side of the business, with each individual business unit spinning its own tale from the same set of data.  Trying to bring order to this environment creates the need to govern the new data as if it were re-usable master data, not application-specific data.

Financial organizations gain business value by applying and using MDM programs that incorporate both Operational and Analytical data. The cleansing of operational data gives decision makers a clear picture of current state. Programs that cleanly master operational data enrich analytical capabilities. Each is dependent upon the other.

And so a fresh perspective on the role of MDM in the Finance Organization should consider the blurring line between Operational MDM and Analytical MDM.  Financial MDM must encompass both models. To ensure an organization meets business demands, it must develop a sound strategy to proactively manage master data across operational and analytical systems.

The journey to reliable, quality financial data

There are technological, organizational, cultural, political, and procedural challenges involved in developing a Financial MDM program for any organization. Any of these can undermine the effort. Further complicating these projects are staff, including executives − maybe even the CFO − who have vested interests in ensuring their particular version of the truth prevails, regardless of the actual data.

A sound Financial MDM strategy should first consider the process, and then the supporting tools and technologies.  When built upon a firm foundation of process, technology takes its proper place as an enabler and a facilitator.  The three critical building blocks of a Financial MDM strategy are:

  • Data Governance. A set of processes that define how data is handled and controlled throughout the organization. These processes and procedures are in place to ensure all persons in the organization understand what the data assets are, how they are defined and maintained, and the methods to be used to affect changes to these artifacts.
  • Data Stewardship. A group of individuals who will oversee the data governance of the key data of the organization. The data stewards are ultimately tasked with ensuring the data elements are correct, unaffected by outside forces and maintained in accordance with the approved and understood procedures.
  • Data Quality.  A metadata management tool is a technological means to ensure the metadata elements are maintained in an orderly process and under a strict set of enforced business rules. It is important to understand that the technology is only a means to enforce the business policies and rules agreed to by the organization. A tool is not MDM in and of itself; but rather it is only one component of the solution. The tool selected should support the creation, modification, and validation of all data relationships and reporting structures for the entire enterprise.

Just keep in mind, all three components are required, and no one component is more important than the others.

The promise of Financial MDM

The journey toward a robust Financial MDM solution is worth the undertaking.  The benefits of implementing and using Financial MDM practices are numerous:

  • Companies adopting a Financial MDM strategy are able to increase productivity across business units by 30% to 50%.
  • Financial MDM strategies create operational efficiencies by eliminating duplicative and redundant processes.
  • Financial MDM strategies reduce risk by removing “hidden silos” and creating total visibility − “who is doing what” − as well as improving data quality and reliability, which impacts regulatory compliance.

Examples of the benefits of Financial MDM

  • Banks and insurance companies find that consolidating data for regulatory reporting makes mergers and acquisitions a seamless and efficient process.
  • A major investment company slashed month-end reporting time by eliminating the manual processes required to manipulate data from over 20 spreadsheets.
  • A defense contractor qualified for bulk-purchase national rates by consolidating divisional data to find duplicate purchase patterns.
  • A data services device provider reduced change-control cycles that had previously required 3 to 4 months to a matter of days.

When you combine an MDM methodology with a strong analytical set of capabilities, it results in a strategic organizational infrastructure that provides the means to seamlessly derive true value by bridging the many islands of data. It becomes Financial MDM — a natural extension of business processes created by a company’s desire to achieve a competitive advantage by ensuring data quality to unlock key performance indicators.

In the next installment:

Part II of this  White Paper will delve further into the two key aspects of Financial MDM:  Operational and Analytical Uses and Drivers.

FOOTNOTES:

  1. “When MDM isn’t MDM? In Finance of course, well sometimes…,” by Andrew White, Gartner, March 1, 2010
  2. “Version of the Truth — Master Data Management,” by Alan Radding, The Big Fat Finance Blog, October 27, 2011



BUG REPORT – Shared Members Security in EPMA

Oracle has confirmed a bug related to the deployment of security for a Planning application maintained in EPMA in version 11.1.2.x.  When the Shared Members checkbox is selected in an EPMA deployment of a Planning application, the deployment ignores the option.  Even if the Shared Members box is checked, the user still gets access only to Ohio Region, and not the children, in the example below.   Oracle is currently working on a patch.

What Does Checking Shared Members Do?

By default, any shared member under a parent with security gets excluded.  For example, if the security for Ohio Region is set to @IDESCENDANTS with READ access, the three members below Ohio Region would have no access.
– Ohio Region
    – Columbus (Shared)
    – Cincinnati (Shared)
    – Cleveland (Shared)

The filter that gets pushed to Essbase would look something like this.

@REMOVE(@IDESCENDANTS("Ohio Region"),@SHARE(@IDESCENDANTS("Ohio Region")))

When Shared Members is checked, it tells Hyperion that you want to include shared members in the security.  In the same example above, with Shared Members selected, users would get access to Ohio Region as well as the three shared members below it.  The filter that gets pushed to Essbase would then look like this.

@IDESCENDANTS("Ohio Region")

The Workaround

The workaround is to deploy the hierarchies from EPMA, and then refresh the database (security only) from Hyperion Planning with Shared Members selected.

When a patch is released, we will post the details.




Why is my database growing? It’s killing my calc times!

There are times when planning and forecasting databases grow for no apparent reason. The static data (YTD actuals) that is loaded hasn’t changed, and the users say they aren’t doing anything different.

If you load budgets or forecasts to Essbase, you probably do what I’m about to tell you. If you are a systems administrator and have never seen how finance does a budget or forecast, this might be an education.

The culprit?  More data!

Budgets and forecasts are not always completed at the bottom of the hierarchy and rolled up. I don’t mean technically; you might be thinking, “Yes they do: they load to level 0 members and the data gets consolidated up the outline.” When it comes to budgets and forecasts, they are largely done in a top-down approach. What this means is that finance is given a goal, or number, they have to hit, and they have to PUSH it down to lower business groups. The way a financial analyst creates a top-down budget, many times, is to allocate a value based on a metric, like headcount or sales.

Assume a budget for desktop support services is required. Let’s say management has mandated that the expense doesn’t grow from last year. Since this cost is to support the people in the business, the expense is divided by the expected headcount and allocated evenly. If a business unit has 20% of the people, that unit will get 20% of the expense. Since the expense to be allocated isn’t going to change, but the headcount will, the following will be the result:
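To make that concrete, here is a quick sketch with hypothetical numbers (the cost centers, headcounts, and expense below are made up for illustration). The expense pool stays fixed, so every time headcount is re-forecasted, each cost center’s allocation moves.

# A fixed expense pool allocated in proportion to headcount (hypothetical numbers)
total_expense = 1_200_000

def allocate(expense, headcount_by_cc):
    total_heads = sum(headcount_by_cc.values())
    return {cc: round(expense * heads / total_heads, 2)
            for cc, heads in headcount_by_cc.items()}

# Original headcount forecast
print(allocate(total_expense, {"CC100": 20, "CC200": 30, "CC300": 50}))
# {'CC100': 240000.0, 'CC200': 360000.0, 'CC300': 600000.0}

# Headcount is re-forecasted; the same pool is simply re-spread
print(allocate(total_expense, {"CC100": 20, "CC200": 20, "CC300": 40}))
# {'CC100': 300000.0, 'CC200': 300000.0, 'CC300': 600000.0}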

Because the analyst doesn’t want to worry about missing any changes to the headcount forecast, he or she will create a data retrieve with headcount for every cost center, whether it has headcount or not. A lock and send sheet now takes the percentage of headcount each cost center has and multiplies that factor by the total expense. As headcount gets re-forecasted, this expense has to be reallocated. With this methodology, all the user has to do is retrieve the sheet with all the headcount forecast. The math does the allocation and the result is sent back to the database.

Easy, right?

This seems to make a ton of sense: an accurate forecast or budget with minimal effort. Not so fast; this approach has two major flaws.

First, the volume of data loaded may be drastically higher than it needs to be. Assume the worksheet has 500 cost centers (500 rows). If half of these have no headcount, there are an additional 250 blocks created that hold zeros (assuming the cost center/organization hierarchy is sparse). This method, although very efficient for updating the numbers for the analyst when headcount changes, is causing the database to grow substantially. In this isolated example, there is twice as much data than is required.

Secondly, since the data has to be loaded at level 0, the analyst thinks loading at every cost center is a requirement. The materiality of the data at this level is often irrelevant. Let’s say that the analyst is really forecasting at the region, but loading data at the cost center because it is required to be loaded at level 0. Assume there are 10 regions in which these 500 cost centers exist. A forecast at the 250 cost centers that have headcount is not required; the forecast only needs to be loaded for 10 cost centers, one for each region. If this method were used, we would create only 10 blocks, rather than the 250, or the 500 originally. When the system has hundreds of users and thousands of accounts, you can see how the size of the database would grow substantially. All of this extra data provides no additional value and creates huge performance problems. In the example above, the number of blocks can be reduced from 500 to 10, and it is far quicker to calculate 10 blocks than 500.

Even if the data needs to be at the cost center, many times the allocation is so small, the result of the allocation is pennies, or dollars. You would be hard-pressed to find a budget where a few dollars is material. In situations like this, the users have to ask themselves if the detail is worth the performance impact.

Users, Help Yourselves

Educate your users and co-workers on the impacts of performing these types of allocations. If loading data at every cost center is required, change your formula. Rather than calculating the expense as

=headcount / total headcount * Total Expense

add an IF statement so that when the retrieve has no headcount, the calculation produces #MI rather than a 0. This would be more efficient:

=IF(headcount=0,"#MI", headcount / total headcount * Total Expense)

If this is not necessary, change the way the data is loaded. Rather than picking all the cost centers, retrieve the headcount from the regions and build the send template to load to one cost center for each region.

The Real World

I worked for a large financial institution with a $100 billion budget. More than 70% of all the data was less than 10 dollars, and 30% was equal to zero! The budget was never looked at below region, which was 4 levels deep in an organization hierarchy that included more than 30,000 cost centers.

After consolidating the insignificant data and educating the users, the calc times decreased from 50 minutes to less than 5. All aspects of performance were better.

Easily Find Out How This is Impacting Your Application

There are a lot of ways to see if this phenomenon impacts your database. If the database is small, the export could be loaded to Excel. With some basic IF statements, the number of cells that are higher or lower than an identified threshold could be determined. Because I regularly work in a lot of different environments with large amounts of data, I wrote an application to traverse through an Essbase export and produce statistics on the data. The application is attached for download. Make sure you have the .NET Framework installed or this will not execute.  Version 3.5 or higher is required and can be found by searching for “download .NET Framework.”  There is a good chance it is already installed.

This is a simple application that I developed quickly to help me understand the degree to which a database is impacted by the example explained above. It will traverse through roughly 25,000 lines every second, and will provide the following metrics:

  • the number and percentage of values above a threshold entered
  • the number and percentage of values below a threshold entered
  • the number and percentage of values that are 0
  • the number and percentage of values that are #Missing, or Null
  • the number of lines in the export and the number of seconds it took to process

To use this, export the database at level 0 and choose column format. You will be prompted for the path and file name of the export, and the threshold to evaluate.
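If you would rather script the analysis yourself, the same metrics can be approximated in a few lines. The sketch below is not the downloadable application, just a rough Python equivalent; it assumes a level 0, column-format export in which data cells are plain numbers and missing cells appear as #Mi, and it makes no attempt to distinguish numeric member names from data values.

import sys, time

def analyze_export(path, threshold):
    above = below = zero = missing = lines = 0
    start = time.time()
    with open(path) as export:
        for line in export:
            lines += 1
            for token in line.replace('"', ' ').split():
                if token.lower() in ("#mi", "#missing"):
                    missing += 1
                    continue
                try:
                    value = float(token)
                except ValueError:
                    continue  # member names and other text are ignored
                if value == 0:
                    zero += 1
                elif abs(value) >= threshold:
                    above += 1
                else:
                    below += 1
    total = above + below + zero + missing
    for label, count in (("above threshold", above), ("below threshold", below),
                         ("zero", zero), ("#Missing", missing)):
        pct = 100 * count / total if total else 0
        print(f"{label:<16} {count:>10}  ({pct:.1f}%)")
    print(f"{lines} lines processed in {time.time() - start:.1f} seconds")

analyze_export(sys.argv[1], float(sys.argv[2]))

Run it as: python analyze_export.py <export file> <threshold>.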

Download Essbase Export Analysis, and give it a try.




Essbase Add-in Ribbon, Version 2 Is Here

Thanks for all the great feedback on our Essbase Add-in Ribbon!  I have seen praise and thanks on the Oracle forums, network54, and a number of other popular hotspots.  I am constantly getting emails of gratitude.  Hundreds, if not thousands, of Hyperion customers are using the ribbon.  With the accolades, I am also getting some great suggestions for additional functionality.  In the spirit of giving back to the Hyperion community, I have every intention of implementing these requests.

What is new for the second release of the ribbon?  For those of you who used the Essbase Powerbar, you are aware of the option to save commonly used server connections.  I am happy to announce that it is now part of the In2Hyperion Essbase Add-in Ribbon feature set!

We moved the connection button that existed on the right to the first position on the ribbon and renamed it Quick Connect.  From this menu button, users can select Connect, Add Quick Connection, or Remove Quick Connection.  As connections are added, they will appear automatically in the Quick Connect menu.

The benefit of this option is that a user can select a “quick connection,” which remembers the server, application, database, username, and password.  Connecting to an Essbase application requires fewer clicks and less typing.  After quick connections are added, a file in My Documents named In2Hyperion.txt will exist.  This is where the connection information is stored.  The password is encrypted to ensure your information is not made available to other parties.

Download version 2.  Stay informed about future releases by signing up for our newsletter.  If you have any feedback, send us an email through the contact page.  Thanks again for all your support!




Financial Reporting with Rolling Years and Periods (Step 4 of 4)

Step 4: Adding ‘Advanced Suppression’ to each of the Year & Period columns.

Step 4 in the development of this report contains the majority of the logic to be set up, which will allow a range of periods to be displayed to users. The idea behind the logic in this section is to move the range of periods displayed to users based on the Period selected in the User POV. The “Range Matrix” below will shed some light on what should be displayed based on what is selected.

Just as Conditional Suppression was set up for the trigger columns, Conditional Suppression will need to be set up for these Year/Period columns. The difference between the “Trigger” section and the “Year/Period” section lies in how columns are chosen to be suppressed. As the name suggests, the “Trigger” section added in steps 1 & 2 will drive the conditional logic, and thus the range of Periods displayed to users. The examples below display a high-level subset of the column logic.

Example 1:

  • User selects “Jan” as the Period.
    • Which Periods will be displayed?
      • Sep (Prior Year)
      • Oct (Prior Year)
      • Nov (Prior Year)
      • Dec (Prior Year)
      • Jan (Current Year)
    • Which Periods will be hidden (suppressed)?
      • Feb-Dec (Current Year)

 

Example 2:

  • User selects “Sep” as the Period.
    • Which Periods will be displayed?
      • May (Current Year)
      • Jun (Current Year)
      • Jul (Current Year)
      • Aug (Current Year)
      • Sep (Current Year)
    • Which Periods will be hidden (suppressed)?
      • Sep-Dec (Prior Year)
      • Jan-Apr (Current Year)
      • Oct-Dec (Current Year)

 

When adding columns to a report, each column will be tagged with an alphanumeric value that identifies the column number. Staying true to the rolling 5-month solution, columns “A” through “L” of your report identify the “Trigger” section (Jan equals “A”, Feb equals “B”… Dec equals “L”). The “Year & Period” section is identified by columns “M” through “AB” of your report (Sep of Prior Year equals “M”, Oct of prior year equals “N”… Dec of current year equals “AB”). When setting up the “Year & Period” Conditional Suppression, it is imperative that you know and understand which Periods correlate to which column numbers.

“Trigger” Section:

“Year & Period” Section:

The Conditional Suppression will need to be added to all “Year & Period” section columns (columns “M” through “AB” in the above images). Column “M” (which correlates to “Sep” of the prior year) will need to be displayed to the user ONLY when the user selects “Jan” for the current POV of the Period dimension. By selecting “Jan”, the user is requesting to see data for Sep-Dec of the Prior Year and Jan of the current year (as shown above in the “Range Matrix”). A subset of the Hyperion Reporting logic is shown in the image below. Similar logic is required for the remaining columns of the “Year & Period” section (columns “N” through “AB”), with the only difference being the suppressed “Trigger” columns selected.
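If it helps to see the column logic stripped of the Financial Reporting dialogs, the sketch below models the net effect of the suppression conditions in Python. It assumes the 5-month rolling window and the column lettering described above (columns “M” through “AB” covering Sep of the prior year through Dec of the current year); it is an illustration of the logic only, not anything you enter into the report.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

# Columns M..AB in report order: Sep-Dec of the prior year, then Jan-Dec of the current year
LAYOUT = [("Prior Year", m) for m in MONTHS[8:]] + [("Current Year", m) for m in MONTHS]
LETTERS = ["M", "N", "O", "P", "Q", "R", "S", "T",
           "U", "V", "W", "X", "Y", "Z", "AA", "AB"]

def visible_columns(selected_period, window=5):
    """Return the Year/Period columns left unsuppressed for the selected POV Period."""
    start = MONTHS.index(selected_period)   # Jan -> 0, which lines up with column M (Sep, Prior Year)
    return [(LETTERS[i],) + LAYOUT[i] for i in range(start, start + window)]

print(visible_columns("Jan"))  # columns M-Q: Sep-Dec of the prior year plus Jan of the current year
print(visible_columns("Sep"))  # columns U-Y: May-Sep of the current year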

Hyperion Reporting – Conditional Suppression Logic:

 

Year & Period Suppression Logic:

 

As stated before, the “Trigger” section of the report drives what is ultimately displayed to the user, and this is based on what the user selects in the User POV for Period. If a report requirement exists for something other than a 5-month rolling view, the number of “Year & Period” section columns would need to be adjusted, as would the Conditional Suppression logic, but the “Trigger” section would not need to change. The overall idea of how to implement this solution remains intact. Please feel free to contact me directly with any questions on implementing a solution such as this; I’m happy to assist when possible.

 




Financial Reporting with Rolling Years and Periods (Step 3 of 4)

Step 3:  Adding Year and Period columns.

The columns added here will be those which are displayed to the users. The Trigger section added in steps 1 and 2 will determine which range of Periods will ultimately be displayed to the users. The key to adding columns in this section of the report is to include ALL possible Periods that could be displayed to the end-user. The Trigger section of the report will essentially move and display a subset range of Periods. For example, if a user selects “Jan” as the current Period, the report will need to display Sep-Dec of the prior year and Jan of the current year.

Keys:

  • These columns will be those displayed to the end-users.
  • These columns MUST be Data columns.
  • A rolling 5-month report will display Sep, Oct, Nov, Dec and Jan IF Jan is selected by the end-user.
  • Either ‘Substitution Variables’ or the ‘RelativeMember’ function can be utilized for the Year dimension (i.e., CurrentYear, CurrentYear-1, etc.).
  • The Period members can be “hard-coded” into the report (don’t use the POV Period option here).

 




Hyperion Release 11 Architecture and Installation, Part 4 of 5

“Validation”

In installment #3 of this series we installed and configured the 11.1.x software.  In this installment we will discuss what the Infrastructure Architect will do before the environment is turned over to the development or migration teams.

It is quite frustrating to the developers if the environment is not fully functional when they start using the system.  It is just as frustrating for the installation architect to have users in the environment while issues are being debugged.  Each installation and configuration project plan should include at least a day or two to review an environment, restart it a few times, check the logs, and then test the functionality of all installed components.  The number of items to validate depends on the products used and licensed by the client, but it should start with the following list, adjusted as necessary.

  • Shared Services
  • Essbase
  • EAS and Business Rules
  • Planning
  • Financial Reporting
  • Web Analysis
  • Interactive Reporting
  • SQR
  • Workspace
  • Smart View and Provider Services
  • Financial Management
  • Financial Data Quality Management
  • Oracle Data Integrator
  • Data Relationship Management
  • Strategic Finance

The Installation Architect will test the functionality of each of the installed products to ensure there are no errors.  This activity takes a combination of functional and technical ability.  The installation architect must know how the application works from the interface as well as understand what any potential errors mean and how they may be corrected.  I’m not suggesting the infrastructure engineer know how to create a P&L report or design a Planning application, but the ability to navigate the user interfaces and test functionality eliminates the problem of encountering errors after development has begun.

Early in my exposure to these applications, I’d spend a lot of time with a developer or functional user of the applications, who would show me how to test the system.  I’d ask them to tell me the first thing they try to do when they get a new environment.  It is always useful to know more about how the applications are used.
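Part of this review can be scripted for the web-facing components, at least to confirm that each service responds before the manual walk-through begins. The sketch below is only an example; the host names and ports are placeholders (defaults vary by version and by how the environment was configured), and an HTTP response is a pulse check, not a substitute for actually exercising each product.

import urllib.request

# Placeholder endpoints -- replace with the hosts and ports from your own deployment
ENDPOINTS = {
    "Workspace":         "http://epmweb:19000/workspace/index.jsp",
    "Shared Services":   "http://epmweb:28080/interop/index.jsp",
    "Planning":          "http://epmapp:8300/HyperionPlanning/LogOn.jsp",
    "Provider Services": "http://epmapp:13080/aps/APS",
}

def smoke_test(endpoints, timeout=15):
    for name, url in endpoints.items():
        try:
            status = urllib.request.urlopen(url, timeout=timeout).getcode()
            print(f"{name:<20} {status}  {url}")
        except Exception as error:
            print(f"{name:<20} FAILED  {url}  ({error})")

smoke_test(ENDPOINTS)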

Some of the common problems that occur include the following.

  • EPMA dimension server does not resolve in Workspace
  • Shared Services doesn’t find users in Active Directory
  • Cannot create Planning Application
  • Cannot create FDM Database
  • ODI repositories are not available
  • Common Essbase commands do not work

The solutions to some of these problems may range anywhere from database access permissions, Windows security rights, and DCOM settings to an incorrect Active Directory setup.  Over the past few years, working on dozens of installations, I’ve seen all of these, and from encountering many of them, the pre-installation requirements covered in installment #2 have been improved.  Some of these problems don’t arise until functionality is tested, so it’s important to test each installation and environment.  I’ve had situations where the development environment tests out fine and the QA environment has issues.  Each installation is usually different from each prior installation because of server settings, security policies, database settings, firewalls, or some other nuance.

If there are problems with the functionality, there are a number of resources available to assist in troubleshooting.  I find the Oracle Technology Network Forum to be very useful; I recommend that anyone looking to work in this space get an ID and get involved.  You may also find some really useful things on blogs like this one and from a number of other very experienced bloggers.  There is a wealth of information in the knowledge base at Oracle Support.  In addition, if you have a support agreement with Oracle, register here and you can get support from Oracle.

Assuming everything is functioning as expected, the environment is turned over to the appropriate support person, or maybe support falls on the same individual that did the installation.  Either way, there is a lot of information that needs to be collected.  In the next installment, we’ll look at the information that should be compiled to capture the state of the environment as it was at the end of the installation as well as information that is useful to those that will be using the system.




Financial Reporting with Rolling Years and Periods (Step 2 of 4)

Step 2: Adding ‘Advanced Suppression’ to each of the 12 Trigger columns.

The Conditional Suppression set on each of these columns (see Step 1) will suppress the column that correlates to the Period selected. If the end-user selects Jan, then the column representing Jan will be suppressed. This is used later in step 4 of the report development.

Keys:

  • The Advanced (Conditional) Suppression for each column relates to the 12 Periods added in Step 1.
  • The logic for Jan is as follows:
    • Suppress Column If:
      • “Member Name” “Period” “equals” “Jan”.
      • “Jan” is the actual member name.
  • The same logic in place for Jan will be required for the Feb-Dec columns (see the sketch after this list); thus…
    • Suppress Column If:
      • “Member Name” “Period” “equals” “Feb”.
      • “Feb” is the actual member name.
      • Etc…
  • Once steps 1 & 2 are complete, development of the trigger section has been finished.
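Stripped of the Financial Reporting dialogs, the rule each trigger column encodes is tiny. The sketch below (plain Python, not report syntax) simply restates it.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def trigger_suppressed(column_month, selected_period):
    """A trigger column is suppressed exactly when its month matches the POV Period."""
    return column_month == selected_period

# With "Jan" selected in the POV, only the Jan trigger column (column A) is suppressed
print([m for m in MONTHS if trigger_suppressed(m, "Jan")])       # ['Jan']
print([m for m in MONTHS if not trigger_suppressed(m, "Jan")])   # Feb through Dec remain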