Drill Through to Data Management and Stay In Excel

A new, and often requested, option has been added to Smart View.  If you use drill through to Data Management (PBCS) or FDMEE (on-premise), download the most recent release of Smart View.  Users now have the option to change where the results of drill-through queries are returned.  You can be prompted for where the results should be displayed, have them displayed in the browser, or (drum roll) have a new worksheet created that holds the results in Excel.

Change Your Option

This isn’t a complicated or drawn-out explanation, but it is sexy!  To change where the results are displayed, go to the Smart View ribbon and click Options to open the dialog.  Select the Advanced tab on the left and scroll down just a tad.  You will see an option for Drill-Through Launch with the three choices mentioned above.  This removes one of the biggest user frustrations regarding drill-through reporting and will surely make a lot of people happy.

The Proof Is In The Pudding

It works just as you would expect, but here is the proof.  Open a retrieve, connect to the application, and right-click on a cell.  Click Drill-through as you always have.

If the cell has more detailed data available and you have chosen the In New Sheet option, a new worksheet will be created in your workbook with the results.

Not Much Else To Cover

That is it.  There isn’t much else to say, but this is a great and frequently requested feature.  When users ask whether they can drill through to the detail and have the results returned in Excel, we can finally give a good answer:

Absolutely.

As always, post a comment if you have something to share with the community or have additional questions about this topic.




Use PowerShell to split large files by month/year for data loads into FDMEE on PBCS

If you are using PBCS, you may run into some challenges with large files being passed through FDMEE.  Whether performance is an issue or you just want to split a file by month/year, this script might save you some time.

The Challenge

I recently had the need to break apart a file.  The source provided one large text file that included two years of data needed to populate the history of an employee metrics application.  The current process loaded files by month, and we wanted to piggyback off the existing scripts to load and process data in FDMEE and the monthly Planning data pushes to the ASO reporting cube.  So, we needed to break the data file into separate files by month and year.  The file was delimited and formatted like the following.

Entity,Year,Scenario,Period,Account,Date,Employee,Pay Code,JobNumber,Data
BU1005,FY15,Actual,Feb,Pay Amount,02/02/2015,V1398950,P105,,108.10
BU1005,FY15,Actual,Feb,Pay Amount,02/03/2015,V1398950,P105,,108.92

The goal was to have a file for every unique month and year combination, each including only the lines for that period.  The header also had to exist in each of the smaller files.  Since we were working on a Windows machine, we used PowerShell to script the solution.

PowerShell Script Directions

The script is pretty simple to use and understand.  Update the script as follows.

  1. Create a new text file with a ps1 extension and paste the following into that file.
  2. Update the srcFile variable to point to the file to be parsed.
  3. Update the startYear variable to the first year in the file to be extracted.
  4. Update the currentYear variable to the last year in the file to be extracted.
  5. Update the ProcessName variable to a meaningful word or phrase that will be used to create the file name.
  6. Save the file and execute it like any other PowerShell script.

This will produce 12 files for each year, each containing the header line plus only the data for the month and year reflected in its file name.
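
For example, assuming ProcessName is set to "Test Process" and the years span 2015 to 2016 (as in the script below), the files would be named like this; the counter simply keeps incrementing across years:

01_Test Process_Jan_2015.txt
02_Test Process_Feb_2015.txt
...
12_Test Process_Dec_2015.txt
13_Test Process_Jan_2016.txt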

Disclaimer

I welcome feedback on improving performance and will give credit to anybody who can improve on this.  I am NOT an expert in PowerShell, and I am sure there are faster ways to accomplish this.  The script created 12 files (1 year / 12 months) from a file containing 7.8 million records and completed in 24 minutes.  So, it is pretty reasonable for one-off requests, but might need attention if it were a repeatable need.

This was developed using PowerShell 5, and some functions do not work in earlier versions of the software.

PowerShell Script

#######################################################################
# Set the file to parse
# 
# Set the start year and end year
# 
# Change the counter if you want the files produced to start at
# something other than 1
#######################################################################
# Write a status to the screen to monitor progress
write-host "Processing started at $($(Get-Date).ToShortTimeString())"

# Update to point to the source file
$srcFile = "C:\Oracle\GCA\Data\Files\2015 Time Data\Time_DataPayAmount2015.csv" 

# Set to the first year you want to process
$startYear = 2015 
# Set to the last year you want to process
$currentYear = 2016 

# Used in the file naming; this is the starting number and increments by 1 for each file
$counter = 1 

# Get the first line (the header line) of the file 
$Header =  Get-Content $srcFile -First 1 

# Set the process name used in the file name 
$ProcessName = "Test Process" 

# Loop through each year in the range 
ForEach ($Years in $startYear..$currentYear )
 {
   # Loop through each month of the year
   ForEach ($months in 1..12 )
   {
     # Get the 3-letter abbreviation of the month being processed
     $ShortMonth = (Get-Culture).DateTimeFormat.GetAbbreviatedMonthName($months)

     # Format year to FYxx (This is typically required on a Planning application)
     $FormattedYear = "FY" + 
     $Years.ToString().substring($Years.ToString().length - 2, 2)

     # Set the file name to a number starting with 1, the Month, and the year
     # Example: 01_ProcessName_Jan_2015.txt
     $FileOut = "{0:00}" -f $counter++ + "_" + $ProcessName + "_" + 
     $ShortMonth + "_" + $Years + '.txt'

     # Write out the header to the newly created file
     $Header | out-file -filepath $FileOut -Encoding utf8  

     # Write out all the lines that match the month and year. The ".*" in the
     # pattern matches anything between the two values, so a line is included
     # only if it contains the processing year followed later on the line by
     # the processing month.
     select-string $srcFile -pattern "${FormattedYear}.*($($ShortMonth))" | 
     foreach {$_.Line} | out-file -filepath $FileOut -Encoding utf8 -Append

     # Write a status to the screen - this is not required, but it shows current
     # progress by reporting the Month/Year just completed and the time it
     # finished
     write-host $fileout "Completed at $($(Get-Date).ToShortTimeString())"
   }
 }
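
A Faster Alternative?

As noted in the disclaimer, there are almost certainly faster ways to accomplish this.  One option, sketched below purely as an untested illustration (the $srcFile and $outFolder paths are placeholders), is to read the source file once and route each line to a per-month output file based on the Year and Period columns, rather than scanning the entire file with select-string for every month/year combination.

# Hypothetical single-pass alternative (PowerShell 5+): read the source once and
# route each line to a Year/Month file based on the second and fourth columns
$srcFile   = "C:\Data\SourceFile.csv"   # placeholder path to the file to split
$outFolder = "C:\Data\Split"            # placeholder output folder (must exist)

# Grab the header line so it can be written to every output file
$Header  = Get-Content $srcFile -First 1
$writers = @{}                           # one StreamWriter per Year_Month key

$reader = [System.IO.StreamReader]::new($srcFile)
$null = $reader.ReadLine()               # skip the header line
while (($line = $reader.ReadLine()) -ne $null) {
  # Columns: Entity,Year,Scenario,Period,Account,Date,Employee,Pay Code,JobNumber,Data
  $cols = $line.Split(',')
  $key  = "$($cols[1])_$($cols[3])"      # e.g. FY15_Feb
  if (-not $writers.ContainsKey($key)) {
    # First time this month/year is seen - create the file and write the header
    $writers[$key] = [System.IO.StreamWriter]::new("$outFolder\$key.txt", $false, [System.Text.Encoding]::UTF8)
    $writers[$key].WriteLine($Header)
  }
  $writers[$key].WriteLine($line)
}
$reader.Close()
$writers.Values | ForEach-Object { $_.Close() }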

Conclusion

Hopefully this will benefit the community.  As I create more scripts like this, I plan to share them.




FDMEE: Loading Data to Different Plan Types

I’m currently working with a Planning application that has two plan types with different dimensionality. This caused some issues when I tried to import data via FDMEE: I received a validation error during the import phase for dimension UD4 (the Customer dimension), which was valid for Plan Type 2 but not for Plan Type 1.

For this specific case, I was trying to load data for Plan Type 1, which has a different set of dimensions from Plan Type 2. The Customer & Product dimensions are not valid for Plan Type 1. Note the settings from the dimension library for both dimensions.

Customer:

Product:

From the setup tab in FDMEE, click Target Application. We need to remove Customer & Product from the “Data Table Column Name” column:

Remove the values for Product & Customer and click save:

Then click refresh metadata:

After refreshing the metadata, go back to the Data Load Workbench and import the data file. The import & validate steps should complete successfully now:




FDM: Loading Data to Multiple Databases Within the Same Application

Although FDMEE is the data management tool of the future for Workspace, it still lacks some of the basic functionality available in FDM Classic. One of these issues arose recently on my current project: how can we have 2 separate load rules in FDMEE, each pointing to a separate database in the same application? The answer, it seems, is that you can’t. To begin, let me describe the issue in FDMEE in a little more depth…

First, on the Locations tab, notice that there are 2 separate locations for our Planning app (DFPLAN2). IFS_Plan should point to the FinPln database, while RevCOGS should point to the RevCOGS database. Notice that they are both tagged to point to the application DFPLAN2, with no differentiation between databases:

 

The same issue arises in the location details for each location, as there is no way to distinguish between the two databases:

 

This leads to the wrong dimensionality being brought in for each of the locations. RevCOGS includes the BusinessUnit dimension, even though its Essbase cube does not have that dimension:

 

And IFS_Plan includes Customer & Product, even though those should be ignored for this database:

 

Upon importing the data to the Data Load Workbench, the data does not validate. The only output given is a couple of blank columns (which doesn’t provide much to go on). That leads us back to our issue at hand: how can we distinguish between the 2 plan types so that we have the correct dimensionality for our data loads? The simplest solution we found is to create another Target System Adapter. This is done via the FDM Workbench on the FDM server.

Log on to the server and open the workbench. Once in the workbench, you will see your Target System Adapters:

 

To copy an adapter, right-click and select “Copy…”. Name the adapter (here we have named ours Essbase2).

 

Expand the adapter, then expand the dimensions folder. The activated dimensions will be black, while the non-active dimensions are greyed out. Notice that each database has a different set of activated dimensions, and that the user-defined dimensions (UD1, UD2, etc.) are customized for both.

 

Since each database has different dimensionality, each adapter will have a unique set of activated dimensions. To edit which dimensions are active, right-click on the desired dimension and select “Properties…”.

 

In the properties screen, the name and alias of the dimension are customizable. Make sure that these match each database’s dimensionality in Essbase. Below are 2 checkboxes: one activates the dimension, the other notes whether or not the dimension is a required field. Leave a checkmark next to the “Active” box for all dimensions in the database.

 

Next, right-click on the adapter itself and select “Adapter Options”. From the dropdown, select “Essbase DB Name”. Here is where to input the relevant database name, so that FDM knows which database it is pointing to during the import process:

 

Notice that we identified a different database for each of the Target System Adapters (RevCOGS & FinPln):

Now that we have made that change on the FDM server, we will see it take effect when we look at the import formats for both RevCOGS and IFS_Plan in FDM Classic. For RevCOGS, UD2 (Source Custom2) is Product and UD3 (Source Custom3) is Customer. They are picked up from column 1 and column 2 of our load file, respectively (as noted by the field number column below):

For IFS_Plan, BusinessUnit is UD2 (Source Custom2) and is grabbed from column 5 in our data file:

To conclude, we were successfully able to distinguish between 2 databases in 1 application. Remember, this was only necessary because the databases had different dimensionality.  We were not able to do this in FDMEE, rather in FDM Classic, which means we cannot load more than 1 period at a time. That is the downside to this solution, but until Oracle includes the functionality in FDMEE, it seems to be our best option.