Kscope13 - Have Your Cake and Eat It Too

I attended many Hyperion Solutions conferences back in the day.  I really enjoyed the experience, but after the first couple of years, I didn’t feel like I was getting my money’s worth.  I started to wonder if I knew all there was to know about Essbase, as every presentation I attended seemed very basic.  Was I that good?  Did the benefit of attending these conferences shift from knowledge gain to networking? I decided to stop attending. 

Last year, Jake Turrell invited me to become part of the Kscope12 Hyperion Planning track selection committee.  Being away from the conference scene for so long, I jumped at the chance to find out how presentations were selected.  I thought it would be a great opportunity to be part of a team that was trying to make the conference content better.  I was extremely impressed with the topics, but was still unsure how valuable attending the conference would be for me, outside of networking.

Two words express what I experienced in San Antonio.

HUMBLED and EXCITED

I was never involved in the abstract selection process for the Hyperion Solutions conference, so I can’t compare how the two conferences collected and selected abstracts, but I can say with conviction that every presentation I attended far exceeded my expectations.

This year, I was asked to take on the role of committee chair for the Planning track.  My goals were simple – to set measurable and meaningful evaluation standards that a democratic group of experts could use to make the best decisions on the presentations for Kscope13, and to continue the conference’s record of presentation excellence.

To provide a little background, presentations were ranked and selected according to their content, regardless of the presenter’s industry exposure.  Once each presentation was paired with its presenter, we verified that

  1. No presenter dominated a track
  2. No consulting company dominated a track
  3. The consulting/customer ratio was reasonable
  4. Presentations known to have been given at a national or regional conference were excluded, unless there was an overwhelming reason for them to be presented again
  5. Presenters’ abilities were verified through interviews, or through feedback from an industry contact who knew the presenter

There was great dialogue among the selection committee about which abstracts showed the most promise.  Once the abstracts were ranked, each one was discussed on its merits and possibilities.  This discussion covered the presenter’s background, the content, and whether it had been presented before.  The selection committee members were not immune to this degree of scrutiny either, as some of my own topics (from the almighty track chairperson himself) were rejected!

The bottom line is that the committee made every effort to showcase the best of the best, regardless of the presenter’s historical credentials and industry panache.

After the smoke cleared, the selected presentations were adjusted so that no presenter or organization dominated the track. Presenters who were not known by a committee member were called and interviewed to ensure the best possible presentations.

If you want to further your knowledge, improve your productivity, network with some of the best minds in our industry, and further your career, join us in New Orleans.  We think the result will be a conference well worth your time and investment.

You are sure to enjoy the experience.




Blind Men & Elephants: Part 1

A Primer for Master Data Management in Finance Organizations
Part I

Overview

You can only improve what you can measure.  That popular business maxim, like many others today, is highly dependent upon data.  What’s more, it can’t be just any data.  Quality decisions require quality data that is timely and reliable.  Moreover, executives must understand what should be measured, how those measurements are obtained, and much more in order to correlate all the data necessary for accurate decision making.  After all, that’s the objective, right?  Making better decisions faster?

Unfortunately, in many organizations today managers struggle to address some basic building blocks. They often have as much success evaluating information as the proverbial blind men touching an elephant.

You know how the story goes:  the one holding onto the tail thinks he’s got a rope in his hands; the one standing next to a leg thinks it’s a tree; and the guy holding onto the snout is pretty sure the company is selling hoses. Focusing on a subset of available data without the bigger picture in mind can lead to faulty assumptions, poor decisions and inaccurate predictions.

This article summarizes the key aspects of Master Data Management (MDM) and clarifies its emerging practices and uses, with a focus on the challenges faced by the Finance Organization of an enterprise.

The Challenge

In today’s constantly evolving enterprise, complex systems are designed to accommodate the needs and emphasis of individual business units, thereby creating enterprise silos of data presented in different formats and often resulting in contradictory numbers.  Measuring corporate progress against KPIs adds even more complexity when information accuracy must be tested through repetitive checkpoints and validations, often using manual processes. And the more manual the process, the less likely it is that business rules are enforcing the standardization that ensures consistent usage. Decentralized approaches to data management typically impact the reliability of the data, leading to extended reporting cycles and decision making based on questionable or faulty data, ultimately impacting the company’s risk profile and bottom line.

Gartner predicts that a lack of information, processes and tools will result in more than 35% of the top 5,000 global companies failing to make insightful decisions about significant changes in their business and markets.

“Gartner Reveals Five Business Intelligence Predictions for 2009 and Beyond,” Gartner Inc., January 2009

This leads to an inherent misunderstanding of, and distrust in, the reports used by management to drive key business decisions. This problem is exacerbated by the need to comply with various regulatory standards, such as SOX, Basel II, Dodd-Frank and IFRS.  All too often executives sign reports and filings that are not completely accurate.

Is there a better way?

The past decade has seen the rise of new concepts, processes and tools to help the enterprise address the challenges of data quality and information reliability.  In most organizations, the operational business systems rely on one or more sets of data including a Customer Master, an Item Master and an Account Master.  Product Masters are also prevalent in many industries.

By adopting a Master Data Management (MDM) strategy, you can create a unified view of such data across multiple sources. When you combine MDM methodology with strong analytical capabilities, you’re able to derive true value from islands of data.

MDM Defined

There is a lot of confusion around what master data is and how it is qualified. There are five common types of data in corporations:

  • Unstructured—This is data found in e-mail, white papers like this, magazine articles, corporate intranet portals, product specifications, marketing collateral, and PDF files.
  • Transactional—This is data related to the operational systems such as sales, deliveries, invoices, trouble tickets, claims, and other monetary and non-monetary interactions.
  • Hierarchical—Hierarchical data stores the relationships between pieces of data. It may be stored as part of an accounting system or separately as descriptions of real-world relationships, such as company organizational structures or product lines. Hierarchical data is sometimes considered a super MDM domain, because it is critical to understanding and sometimes discovering the relationships between master data.
  • Master—Master data relates to the critical nouns of a business and falls generally into four groupings: people, things, places, and concepts. Further categorizations within those groupings are called subject areas, domain areas, or entity types. For example, within people, there are customer, employee, and salesperson. Within things, there are product, part, store, and asset. The requirements, life cycle, and CRUD cycle for a product in the Consumer Packaged Goods (CPG) sector is likely very different from those of the clothing industry. The granularity of domains is essentially determined by the magnitude of differences between the attributes of the entities within them.
  • Metadata—This is “data” about other data and may reside in a formal repository or in various other forms such as XML documents, report definitions, column descriptions in a database, log files, connections, and configuration files.

“The What, Why, and How of Master Data Management,” by Roger Wolter and Kirk Haselden,  Microsoft Corporation

Let’s be very clear: MDM is a comprehensive business strategy to build and maintain a single, dependable and accurate index of corporate data assets; it is not just a tool.  It includes technology-assisted governance of the master data and interfaces with operational and analytical systems. So, MDM is not a technology application in and of itself. MDM is a set of business and governance processes, all supported by a dedicated technology infrastructure. The technology exists to support the overall MDM environment, not the other way around. But in order to accomplish that, the first order of business must be to gain a complete understanding of how business processes work cross-functionally within the organization.

The shifting landscape of Financial MDM

When is MDM not MDM?  When it’s Financial MDM.
“Myths of MDM,” Gartner, January 2011

Traditionally, MDM could be divided into two discrete worlds – Operational and Analytical. Andrew White, research vice president at Gartner, distinguishes the two.[1] He notes that Operational MDM places an emphasis on process integrity and data quality “upstream” in core business applications.  Operational instances deal with sales regions, territories, products, etc. Traditionally, Finance MDM − mastering hierarchy and ledger/account data for use in “downstream” or reporting systems − has been equated to “Analytical MDM.” Here financial users conduct forward-looking analyses – what if an acquisition occurs, what factors impact my goods production, etc.

Operational MDM 

Any enterprise requires significant amounts of data to operate under current conditions and to plan for the future. Organizations gather petabytes of information regarding sales, customer service, manufacturing and more. A key component of Operational MDM is transactional data – time, place, price, payment method, discounts, etc. Operational MDM supports day-to-day activities of an organization, but can’t deliver insights to guide decision making.

Analytical MDM

An enterprise uses Analytical MDM to make overarching evaluations and forward-looking decisions.  Analytical MDM processes utilize information such as customer demographics and buying patterns.  Large data warehouses enable comprehensive data aggregation and queries, and applying Analytical MDM delivers insights critical to planning for the future.  The value derived by the business is directly dependent on the quality of the underlying operational data.

The trouble, as White pointed out, is that some applications used in financial organizations operate on transactions published from operational systems and actually behave like business applications.  The example given by White is a corporate reporting function that initially harmonizes disparate master data for global/corporate reporting and then must author its own versions (or views) of the same hierarchy and master data.  As such, this application is no longer purely “downstream” or “Analytical MDM” because the data now has to be governed much as other application-specific information is authored.  It gets even more confusing as the newly-authored hierarchy is shared and re-used across the organization and the operational side of the business, with each individual business unit spinning its own tale from the same set of data.  Trying to bring order to this environment creates the need to govern the new data as if it were re-usable master data, not application-specific data.

Financial organizations gain business value by applying and using MDM programs that incorporate both Operational and Analytical data. The cleansing of operational data gives decision makers a clear picture of the current state. Programs that cleanly master operational data enrich analytical capabilities. Each is dependent upon the other.

And so a fresh perspective on the role of MDM in the Finance Organization should consider the blurring line between Operational MDM and Analytical MDM.  Financial MDM must encompass both models. To ensure an organization meets business demands, it must develop a sound strategy to proactively manage master data across operational and analytical systems.

The journey to reliable, quality financial data

There are technological, organizational, cultural, political, and procedural challenges involved in developing a Financial MDM program for any organization. Any of these can undermine the effort. Further complicating these projects are staff, including executives − maybe even the CFO − who have vested interests in ensuring their particular version of the truth prevails, regardless of the actual data.

A sound Financial MDM strategy should first consider the process, and then the supporting tools and technologies.  When built upon a firm foundation of process, technology takes its proper place as an enabler and a facilitator.  The three critical building blocks of a Financial MDM strategy are:

  • Data Governance. A set of processes that define how data is handled and controlled throughout the organization. These processes and procedures are in place to ensure all persons in the organization understand what the data assets are, how they are defined and maintained, and the methods to be used to effect changes to these artifacts.
  • Data Stewardship. A group of individuals who will oversee the data governance of the key data of the organization. The data stewards are ultimately tasked with ensuring the data elements are correct, unaffected by outside forces and maintained in accordance with the approved and understood procedures.
  • Data Quality.  A metadata management tool is a technological means to ensure the metadata elements are maintained in an orderly process and under a strict set of enforced business rules. It is important to understand that the technology is only a means to enforce the business policies and rules agreed to by the organization. A tool is not MDM in and of itself; but rather it is only one component of the solution. The tool selected should support the creation, modification, and validation of all data relationships and reporting structures for the entire enterprise.

Just keep in mind, all three components are required, and no one component is more important than the others.

The promise of Financial MDM

The journey toward a robust Financial MDM solution is worth the undertaking.  The benefits of implementing and using Financial MDM practices are numerous:

  • Companies adopting a Financial MDM strategy are able to increase productivity across business units by 30% to 50%.
  • Financial MDM strategies create operational efficiencies by eliminating duplicative and redundant processes.
  • Financial MDM strategies reduce risk by removing “hidden silos” and creating total visibility − “who is doing what” − as well as improving the data quality and reliability that impact regulatory compliance.

Examples of the benefits of Financial MDM

  • Banks and insurance companies find that consolidating data for regulatory reporting makes mergers and acquisitions a seamless and efficient process.
  • A major investment company slashed month-end reporting time by eliminating the manual processes required to manipulate data from over 20 spreadsheets.
  • A defense contractor qualified for bulk-purchase national rates by consolidating divisional data to find duplicate purchasing patterns.
  • A data services device provider reduced a change-control process that had previously required 3 to 4 months to a matter of days.

When you combine an MDM methodology with a strong set of analytical capabilities, the result is a strategic organizational infrastructure that provides the means to seamlessly derive true value by bridging the many islands of data. It becomes Financial MDM: a natural extension of business processes created by a company’s desire to achieve a competitive advantage by ensuring the data quality needed to unlock key performance indicators.

In the next installment:

Part II of this white paper will delve further into the two key aspects of Financial MDM: Operational and Analytical uses and drivers.

FOOTNOTES:

  1. “When MDM isn’t MDM? In Finance of course, well sometimes…,” by Andrew White, Gartner, Mar. 1, 2010
  2. “Version of the Truth — Master Data Management,” The Big Fat Finance Blog, by Alan Radding, Oct. 27, 2011



My Grid Rows Aren’t Aligned To My Data Rows

No, your eyes aren’t playing tricks on you. The grid lines don’t align with the row headers. The offset is very slight on smaller forms, but forms with hundreds of rows compound the issue. The further down the grid you go, the more pronounced the offset becomes.

If you have ever seen this issue and pulled your hair out trying to figure out why it happens for some users and not others, fixing it is embarrassingly easy (now that I know). Change the zoom in IE back to 100%!

Take a look at the bottom right area of IE.  If you see that the zoom is NOT 100%, select the View / Zoom / 100% menu.

Hopefully this will save you some time if you ever run across this issue.




Change Application Maintenance Mode via Command Line

Patch Set Update: 11.1.2.1.600 offers a welcome utility

If you have ever tried to automate the state of a Hyperion Planning application’s Application Maintenance Mode, you found it difficult. The only way to accomplish this was to run a SQL update on the repository table, and for the change to take effect, the Planning service had to be restarted.

If you are unfamiliar with the Application Maintenance Mode setting, it is found in the Administration/Application/Settings menu. Changing this setting from All Users to Administrators locks planners out of the application. It is typically used when changes are made to hierarchies, web forms, system settings, or security, and during deploys, to keep users out while changes are being introduced.

Patch Set 11.1.2.1.600, and the corresponding patch release for 11.1.2.2, introduces a new utility that allows administrators to change this setting from a command line. YEAH, it can now be automated without restarting Planning!

Without Further Ado

MaintenanceMode.cmd (or MaintenanceMode.sh in UNIX) is found in the <EPM_PLANNING_INSTANCE> directory. The following parameters can be passed, separated by commas.

  • /A=app – Application name (required)
  • /U=user – Name of the administrator executing the utility (required)
  • /P=password – The administrator’s password (required)
  • /LL=loginLevel – [ALL_USERS|ADMINISTRATORS|OWNER]

ALL_USERS – All users can log on or continue working with the application.

ADMINISTRATORS – Only other administrators can log on. Other users are forced off and prevented from logging on until the parameter is reset to ALL_USERS.

OWNER – Only the application owner can log on. All other users are prevented from logging on. If they are currently logged on, they are forced off the system until the option is reset to ALL_USERS or ADMINISTRATORS. Only the application owner can restrict other administrators from using the application.

  • /DEBUG=[true|false] – Specify whether to run the utility in debug mode. The default is false. (optional)
  • /HELP=Y – View the utility syntax online (optional)

Example

MaintenanceMode.cmd /A=app1,/U=admin,/P=password,/LL=ADMINISTRATORS

MaintenanceMode.cmd /A=app1,/U=admin,/P=password,/LL=ALL_USERS
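To put the utility to work in an automated deploy, a wrapper script can lock the application before changes are pushed and reopen it afterward. The sketch below is only an illustration; it reuses the placeholder application name and credentials from the examples above, and the middle step stands in for whatever deploy, refresh, or metadata update your process performs.

REM Illustrative wrapper script (placeholder application name and credentials)
REM Lock the application so only administrators can log on during the change window
call MaintenanceMode.cmd /A=app1,/U=admin,/P=password,/LL=ADMINISTRATORS

REM ... run your deploy, refresh, or metadata update steps here ...

REM Reopen the application to all users once the changes are complete
call MaintenanceMode.cmd /A=app1,/U=admin,/P=password,/LL=ALL_USERS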

 




Kscope 13 Abstract Submissions Are Open

Kscope13 abstract submission is open.  I will be spearheading the Planning track at Kscope13.  Last year was my first trip to the conference, and I was amazed at the talent and breadth of the speakers.  Jake Turrell did an unbelievable job owning the Planning track last year.  He was also involved in the release of Developing Essbase Applications (available in hardback and Kindle versions).

I am going to lean on him for help to try to make this year’s event even better.  The most important part of the conference starts now, and we need your help.  Submit an abstract.  Only a brief description is required at this point.  It takes less than 30 minutes.  We don’t need a full presentation now; that comes later.  If you are nervous about speaking, we will have a number of helpful presentations on what to expect and how to make your presentation as effective as possible.  If you do it once, I promise you will be hooked.

What you will need to submit

  1. Personal bio
  2. Presentation title
  3. Presentation description
  4. Presentation summary (no more than 100 words)
  5. Benefits of attending your presentation

To submit, go to the Kscope13 Abstract Submission Site.  Creating an account takes only a minute.  Once you are logged in, input your bio and click the “Add Abstract” button.  Use the information above to fill out the form.  Select the related technology and the type of presentation, and you are done!  The Kscope13 Content page has suggestions that will help your submission stand out.

Submissions are open now through October 15th.




Website Updates Mean Improved Viewer Experience For You

We have introduced a few changes to the site and hope they add value and ease of use.

First, we have upgraded the blog engine to the newest version to eliminate some issues the site was experiencing with newer browsers.  We have received feedback that some functions weren’t working as expected, and these changes should fix the issues viewers were having.

Secondly, the navigation bar on the right has been updated.  New articles will be assigned an experience level, which will help readers with different levels of experience find articles more applicable to them.  Our Twitter feed will also be streamed near the bottom.  The plan is to tweet more information: items that don’t necessarily warrant a full article but are still valuable.  If you aren’t following us on Twitter, you will find it worthwhile.  Expect bug announcements, tips, interesting finds in new versions, and other applicable topics.




BUG REPORT – Shared Members Security in EPMA

Oracle has confirmed a bug related to the deployment of security for a Planning application maintained in EPMA in version 11.1.2.x.  When the Shared Members checkbox is selected in an EPMA deployment of a Planning application, the option is ignored.  Even if the Shared Members box is checked, the user still only gets access to Ohio Region, and not its children, in the example below.  Oracle is currently working on a patch.

What Does Checking Shared Members Do?

By default, any shared member under a parent with security gets excluded.  For example, if the security for Ohio Region is set to @IDESCENDANTS with READ access, the three shared members below Ohio Region would have no access.
– Ohio Region
    – Columbus (Shared)
    – Cincinnati (Shared)
    – Cleveland (Shared)

The filter that gets pushed to Essbase would look something like this.

@REMOVE(@IDESCENDANTS("Ohio Region"),@SHARE(@IDESCENDANTS("Ohio Region")))

When Shared Members is checked, it tells Hyperion that you want to include shared members in the security.  In the same example, with Shared Members selected, users would have access to all three shared members.  The filter that gets pushed to Essbase would then look like this.

@IDESCENDANTS("Ohio Region")

The Workaround

The workaround is to deploy the hierarchies from EPMA, and then refresh the database (security only) from Hyperion Planning with Shared Members selected.

When a patch is released, we will share the details.




Article Now Available in InVision

Josh Forrest and I presented at last year’s Collaborate conference.  Along with that presentation, we wrote a white paper on the implementation of Hyperion Planning.  The paper covered the process of selecting a vendor, the project goals, requirements gathering, the project methodology, and even the lessons learned once the implementation was completed.

The editors at OAUG asked us if they could publish the article in the summer edition of InVision, which was released this week.  The article was written in close collaboration with Abercrombie & Fitch and represents the process from the business point of view, not the consulting services point of view.

The article can be downloaded at www.oaug.org.  Access to the article requires free registration.




KScope 2012 Wrap-Up

Kscope was another fantastic event.  Kudos to those responsible for organizing it.  Thank you to all the speakers who volunteered their time and shared their knowledge.  The most frequent request Josh, Rob, and I received was to make our presentations available.  They are available on the Kscope site, but many of you don’t have access.  So, we are happy to make them available here.

I also had tremendous feedback on the ribbon. At least half the participants in our sessions used it.  We got a couple of great recommendations as well.  With some luck (meaning my schedule slows down a little), I will be working on those in the near future.

Download Josh Forrest’s presentation on Hyperion Financial Reporting

Download Rob Donahue’s presentation on Hyperion infrastructure

Download Kyle Goodfriend’s presentation on Hyperion Planning




Meet XWRITE, XREF’s New Big Brother

The introduction of Hyperion 11.1.2 brings some fantastic improvements.  Many of these have been long awaited.  The next few articles on In2Hyperion will describe some of the enhancements to Hyperion Planning, Hyperion Essbase, and Hyperion SmartView.

XREF Background

If you have been developing Planning applications, you are probably very familiar with the XREF function.  This function is used in business rules, calculation scripts, and member formulas.  It provides a method to move data from one plan type (Essbase database) to another plan type.  It is executed from the target database and pulls the data from the source.  XWRITE was actually introduced in later versions of 11.1.1.x, but is very stable in 11.1.2.x.  XWRITE is executed from the source and pushes data to the target.  This function is a huge improvement over XREF. 

XREF copies data to a target database and must be executed from the target database.  The function pulls data rather than pushing it.  This causes two challenges.  Normally, the data is entered in the source database and is copied to the destination database.  When a Planning web form is saved, it can only execute a calculation on the database the web form is connected to (at least in older versions – stay tuned).  This means an XREF function cannot be used when the form is saved.  The user has to go to another form, or execute a business rule manually, for the data to move.

The larger issue with XREF is accounting for block creation.  Remember, XREF pulls data from a source.  The destination may not have blocks that exist where the data will reside.  XREF does NOT account for the creation of the blocks if blocks don’t exist.  XREF must be used in conjunction with the CREATEBLOCKONEQUATION setting.  This is acceptable when fixing on very finite levels of data, but execution on larger amounts of data results in an extremely slow data movement process.  Essbase is responsible for the slow data movement process because it traverses all possible sparse member combinations to validate existence of data on the source.  Normally, data exists at a very small percentage of the possible blocks. In addition to the slow data movement process, it’s worth noting that the XREF function can also create blocks in your database which are unnecessary; ultimately increasing the size and decreasing the speed of your application.
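To make this concrete, here is a minimal sketch of the traditional XREF approach, run as a calculation script on the target database.  The member names ("Budget", "FY13", "Working", "Revenue") and the location alias (SourcePlan) are made up for illustration, and SET CREATEBLOCKONEQ is the calculation script form of the CREATEBLOCKONEQUATION setting mentioned above.

SET CREATEBLOCKONEQ ON;   /* allow Essbase to create missing target blocks as XREF assigns values */
FIX ("Budget", "FY13", "Working")
   /* Pull Revenue from the source plan type through the SourcePlan location alias */
   "Revenue" = @XREF(SourcePlan, "Revenue");
ENDFIX
SET CREATEBLOCKONEQ OFF;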

Welcome to XWRITE

XWRITE is the opposite of XREF.  Rather than pulling the data into the target, as XREF does, XWRITE enables you to push data from the source to the target.  Pushing the data resolves the issues that XREF creates.

When XWRITE is executed from a web form, thus pushing data from the source to the target, there’s no longer a need to account for this process with two web forms or the manual execution of a business rule.

Since XWRITE is executed from the source, there’s no longer a need to look at every possible sparse member combination on the target.  Using a FIX statement enables Essbase to decipher which blocks need to be copied, removing the guesswork and, subsequently, the requirement of CREATEBLOCKONEQUATION.  Using the XWRITE function results in faster processing and efficient block creation.
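Here is an equally minimal sketch of the same movement executed from the source database.  Again, the member names and the location alias (TargetPlan) are placeholders; this illustrates the pattern rather than a drop-in script.

FIX ("Budget", "FY13", "Working")
   "Revenue" (
      /* Push Revenue from this source plan type to the target through the TargetPlan location alias */
      @XWRITE("Revenue", TargetPlan, "Revenue");
   );
ENDFIX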

Prior to XWRITE, my preferred method of data movement involved exports from the source and imports to the target; thus eliminating the need for the XREF function.  The introduction of XWRITE has reduced the need for a data export/import process.