The FDIC’s New Financial Environment (NFE) Testing

June 2005
Report No. 05-019

AUDIT REPORT

FDIC OIG, Office of Audits

Background and Purpose of Audit


On December 10, 2001, the FDIC’s Board of Directors approved the purchase and implementation of a commercial-off-the-shelf solution to support an enterprise-wide financial environment for the FDIC. In October 2002, the FDIC contracted with Accenture LLP to assist the Corporation in replacing its financial systems with a PeopleSoft financials solution. Deployment of the New Financial Environment (NFE) core financial system was originally scheduled for July 1, 2004. In June 2004, the Board approved the business case to re-baseline the NFE project with an additional cost of $18 million and a revised deployment date of mid-2005 for the core financial system. The NFE Principals and the NFE Steering Committee are responsible for overseeing NFE project activities.

The audit objective was to determine the adequacy of the NFE test processes and the defect and change management processes in resolving problems identified during NFE testing. The report was prepared by KPMG LLP under a contract with the OIG to provide professional audit services.

FDIC, Federal Deposit Insurance Corporation


Results of Audit


The FDIC has developed a rigorous multi-stage test strategy and schedule for the New Financial Environment (NFE) to ensure it will function as designed and meet users’ needs. The FDIC reported that most test activities considered critical to final decisions had been completed before deployment of NFE core PeopleSoft financial modules on May 2, 2005. However, KPMG found that improvements were needed in the various testing phases of NFE, such as performing sufficient production simulation tests, providing evidence of verification for month- and year-end closings, ensuring adequate documentation for problem identification and resolution, and independently verifying the accuracy and completeness of tests performed for business processes. As a result, financial management system integrity and financial reporting risks may not have been mitigated to an acceptable level at the time KPMG completed its audit work.

We provided details of the findings as they were identified to the Division of Finance (DOF) and NFE project management team to facilitate timely corrective action and response where appropriate. Also, in order to facilitate corrective action, KPMG assigned a risk ranking for each condition found in system integration testing, quality assurance testing, and user acceptance testing based on defined risk management assessment criteria for the NFE project.

Recommendations and Management Response

We recommended that DOF and the NFE project management team review the risks identified and develop a risk resolution and action approach in accordance with the risk mitigation procedures outlined in the NFE risk management plan.

FDIC management concurred with the recommendation and provided a risk assessment matrix that summarizes the risk resolution and action approach for the conditions discussed in the report. Management also responded that sound management processes were instrumental in mitigating risks and its control framework afforded a high degree of confidence that a “go live” decision was appropriate under the circumstances. Management’s corrective actions effectively addressed our findings and recommendation, which is considered closed.



[Figure: V-Model of Multi-stage Testing]
Source: DOF Corporate Applications Testing Strategy, Version 2, 3/15/2004.


FDIC OIG letterhead

DATE:  June 6, 2005

MEMORANDUM TO:  Fred S. Selby
 Director, Division of Finance

FROM: Russell A. Rau [Electronically produced version; original signed by Stephen M. Beard]
 Assistant Inspector General for Audits

SUBJECT:  The FDIC’s New Financial Environment (NFE) Testing
 Report No. 05-019

Enclosed is a copy of the subject report prepared by KPMG LLP under a contract with the Office of Inspector General (OIG). Please refer to the Executive Summary for the overall audit results. The firm’s report is presented as Part I of this document.

The report concludes that the FDIC had developed a rigorous multi-stage test strategy and schedule for the NFE to ensure it would function as designed and meet users’ needs. However, the report includes a recommendation that the Division of Finance and the NFE project management team review and develop a risk resolution approach for risks that may not have been mitigated to an acceptable level where aspects of testing needed improvement.

A summary and our evaluation of your response, the response in its entirety, and the status of the recommendation are contained in Part II of this report. The response adequately addressed the recommendation in the report. We consider the recommendation to be resolved, dispositioned, and closed as the agreed-upon corrective action has been implemented and determined to be effective.


TABLE OF CONTENTS

Part I:
Report by KPMG LLP
The FDIC’s New Financial Environment (NFE) Testing
Part II:
Corporation Comments and OIG Evaluation
Corporation Comments
Management Response to Recommendation


KPMG





FDIC’s New Financial Environment (NFE) Testing

Prepared for the
Federal Deposit Insurance Corporation
Office of Inspector General





Part I

Report by KPMG LLP



TABLE OF CONTENTS

I.    EXECUTIVE SUMMARY
Results of Audit
Recommendation
II.   BACKGROUND
Project Scope
Project Test Strategy and Results
Managing Test Activities
III.  DETAILED FINDINGS
Finding 1: Production Simulation Tests Needed
Finding 2: Effectiveness of Accounting Verifications for Month- and Year-End Closings
Finding 3: Defect Consolidation, Traceability, and Documentation
Finding 4: Effectiveness of Test Activities Performed
APPENDIX A: OBJECTIVE, SCOPE, AND METHODOLOGY
APPENDIX B: APPLICABLE STANDARDS AND GUIDANCE
APPENDIX C: NFE TEST STRATEGY AND PROCESSES
APPENDIX D: RISK ASSESSMENT APPROACH
APPENDIX E: ACRONYMS
TABLE:
Summary of Findings
FIGURES:
Figure 1: V-Model
Figure 2: Risk Assessment Matrix



Acronyms

CCB Change Control Board
CCR Central Contractor Registry
CMMI Capability Maturity Model Integration
COTS Commercial-off-the-shelf
CQMS Configuration and Quality Management Staff
CTM Control Totals Module
DIT Division of Information Technology
DOA Division of Administration
DOF Division of Finance
DPS Dividend Processing System
DRR Division of Resolutions and Receiverships
ETV Electronic Travel Voucher
FDIC Federal Deposit Insurance Corporation
FDL FDIC’s Digital Library
FFMIA Federal Financial Management Improvement Act
GAO Government Accountability Office
JFMIP Joint Financial Management Improvement Program
NFE New Financial Environment
NIST National Institute of Standards and Technology
OERM Office of Enterprise Risk Management
OIG Office of Inspector General
OMB Office of Management and Budget
SIT Systems Integration Testing
SQT System Qualification Testing
UAT User Acceptance Testing


I.   Executive Summary

The Federal Deposit Insurance Corporation (FDIC) Office of Inspector General (OIG) contracted with KPMG LLP to provide professional audit services. A task order issued under the contract called for KPMG to audit and report on the effectiveness of the FDIC’s New Financial Environment (NFE) system development test activities. This audit is one in a series of OIG audits of the FDIC’s system development initiatives on the NFE project.

The objective of the audit was to determine the adequacy of the NFE test processes and the defect and change management processes in resolving problems identified during testing. Our audit addressed Systems Integration Testing (SIT), User Acceptance Testing (UAT), and Configuration and Quality Management Staff (CQMS) activities, which are described in the Background section of this report. A detailed discussion of our audit objective, scope, and methodology is provided in Appendix A of this report.

KPMG evaluated test activities according to software verification and validation guidelines established by the National Institute of Standards and Technology (NIST), the Capability Maturity Model Integration (CMMI) for systems engineering, and Joint Financial Management Improvement Program (JFMIP) for government financial systems, which are discussed in Appendix B. Additional guidelines considered in this review include those published by the Government Accountability Office (GAO) and the Office of Management and Budget (OMB) related to the implementation of the Federal Financial Management Improvement Act (FFMIA). KPMG conducted its work in accordance with generally accepted government auditing standards from November 1, 2004 through March 8, 2005.

Results of Audit

The FDIC had developed a rigorous multi-stage test strategy and schedule for NFE to ensure it will function as designed and meet users’ needs. The FDIC reported considerable progress in completing most test activities considered critical to final decisions on the deployment of NFE core PeopleSoft financial modules scheduled for May 2, 2005. However, KPMG found that improvements were needed in SIT, UAT, and CQMS activities. As a result, financial management system integrity and financial reporting risks may not have been mitigated to an acceptable level at the time KPMG completed its audit work.

We provided details of these findings as they were identified to the Division of Finance (DOF) and NFE project management team to facilitate timely corrective action and response where appropriate. Each finding is summarized in the table on the next page for the test areas reviewed, and KPMG has assigned a risk ranking based on defined risk management assessment criteria for the NFE project.

Summary of Findings

Finding | Condition | Test Activity | Risk (a)
1 | Inadequate user training. | UAT | High
1 | Inadequate chart of account (b) tests. | UAT | High
1 | Business processes were not tested sequentially from start to finish without interruption. | UAT | High
1 | UAT did not include all production simulation testing. | UAT | High
2 | Documented evidence did not exist for accounting-based verifications performed for month- and year-end closings during SIT. | SIT | Medium
2 | UAT month- and year-end tests planned do not provide requirements for accounting verifications and reconciliations. | UAT | High
2 | Scripts for identifying or researching un-posted transactions and posting errors were omitted. | SIT | Medium
2 | CQMS did not perform an independent review of the month-end and year-end test processes. | CQMS | Medium
2 | Independent test activities excluded UAT validation activities. | - | Medium
3 | A centralized and controlled defect tracking system has not been established. | SIT, UAT | Medium
4 | UAT documentation is not effectively organized to independently verify the accuracy and completeness of tests performed for NFE business processes. | UAT | Medium
4 | Test activities and scenarios were missing from sampling test scripts (c) applicable to compliance requirements of the FFMIA. | UAT | Medium
4 | There are no plans at this time for testing about 50-70 customized Financial Management System (FMS) reports currently generated by the Walker General Ledger. (d) | UAT | Medium

Test Activity: SIT = Systems Integration Testing; CQMS = Configuration and Quality Management Staff; UAT = User Acceptance Testing.

  a. See Appendix D for the risk management ranking criteria applied.
  b. The chart of accounts is a listing of all the accounts in the general ledger; each account is accompanied by a reference number. To set up a chart of accounts, the various accounts to be used by the business need to be defined.
  c. A test script is a test designed for a specific business process activity. Part of this design includes applicable scenarios representing methods for accomplishing a given activity.
  d. The current financial management system is referred to as the Walker Interactive system.

Recommendation

KPMG recognized that many of the issues identified and previously reported to the FDIC may have impacted NFE-scheduled implementation and deployment activities. Therefore, KPMG recommended that the Director, DOF, and the NFE project management team review the risks identified and develop a risk resolution and action approach in accordance with the risk mitigation procedures outlined in the NFE risk management plan.

II.   Background

On December 10, 2001, the FDIC’s Board of Directors approved the purchase and implementation of a commercial-off-the-shelf (COTS) solution to support an enterprise-wide, integrated financial environment for the FDIC. The decision was based on the need to modernize the FDIC’s complex and aging legacy financial system. The current financial management system, referred to as the Walker Interactive system, is characterized as a system with non-integrated components feeding into core financial management system (FMS) functions that requires significant reconciliation activity. Substantial manual processes and significant staff resources from the FDIC’s DOF, Division of Insurance and Research, and Division of Resolutions and Receiverships (DRR) are needed to achieve an unqualified opinion on the Corporation’s financial statements. Additionally, the current system’s functionality is limited and may preclude infrastructure upgrades.

The FDIC contracted with Accenture LLP (Accenture) in October 2002 to assist the Corporation in replacing its financial systems with a PeopleSoft financials solution, a COTS product. DOF and the Division of Information Technology (DIT) jointly managed the project. The NFE project involves less than a 5-percent customization of the PeopleSoft financial modules. The FDIC considers the re-engineering of its business practices to be a critical factor in achieving the expected benefits of the NFE in terms of streamlining business processes and avoiding the high-maintenance costs associated with software customization. The implementation of the core financial system was originally scheduled to occur on July 1, 2004. In June 2004, the Board approved the business case to re-baseline the NFE project with a revised implementation schedule and $18 million in additional funding to support the project costs associated with evaluation of the new system and changing business processes, renovation of legacy systems, new security and quality assurance mandates, and a contingency fund. Under the revised schedule, the core financial system was scheduled for implementation on May 2, 2005. The Budget Formulation/Receivership Service Billing/Enterprise Warehouse component and the cost management component were planned for implementation on July 1 and September 1, 2005, respectively.

The NFE Principals and the NFE Steering Committee are responsible for overseeing NFE project activities. The NFE Principals group is composed of the Chief Financial Officer (CFO) and the directors of the divisions most impacted by NFE implementation (DOF, DIT, DRR, and the Division of Administration (DOA)). The Steering Committee provides direct oversight and includes senior management representatives from DIT, DRR, DOA, the Division of Supervision and Consumer Protection (DSC), and the Office of Enterprise Risk Management (OERM). The NFE project management team is responsible for providing guidance and direction to all involved parties in these activities, including the FDIC test coordinators, legacy and core test managers, FDIC NFE team leads, Accenture test execution team, Accenture fix teams, and business area points of contact.

Project Scope

The current NFE project timeline for deployment is separated into three components. The first component calls for the deployment of core PeopleSoft financial modules in May 2005. This involves significant changes to FDIC business processes, including:

  • Creating a new accounting structure to collect and track the required financial and cost management data.
  • Converting vendor registration and maintenance to the federal Central Contractor Registry (CCR) System.
  • Creating a central electronic repository for procurements and contracts.
  • Automating the procurement card system.
  • Increasing asset management functionality, including integration with purchase orders and payable vouchers.
  • Automating the capture of receipts and disbursement funds for more effective cash management.
  • Establishing automated workflow processes to simplify and streamline many paper-based processes.

Additionally, 25 systems will integrate into the NFE core modules, primarily for the General Ledger, Accounts Payable, and Supplemental Payment System modules. The 25 systems include 23 legacy systems and 2 new systems related to employee time and attendance and legal information and case management. The NFE project management team identified the Payroll Bridge System (the “translator” of payroll processing results from the National Finance Center into the general ledger) and the Electronic Travel Voucher System (travel reimbursements) as two legacy systems that must be operational when NFE is deployed.

The deployment of the second component, Budget Formulation/Receivership Service Billing/Enterprise Warehouse, is targeted for July 1, 2005. The third component, the Activity Based “Cost” Management module, is scheduled to be deployed by September 1, 2005.

Project Test Strategy and Results

Effective test, defect handling, and change management processes provide a means of mitigating significant system integrity issues that could impact a system’s future operational state. Test process activities should ensure that all aspects of the new system will function correctly, meet users’ needs, and work as intended in the system’s operational environment. Federal standards and FDIC policy require the performance of test verification and validation activities.

The FDIC had developed a rigorous multi-stage test strategy and schedule for NFE to ensure that the system will function as designed and meet users’ needs (see Appendix C for a detailed description of test processes). Key components of this test strategy critical to final decisions on NFE deployment include SIT and UAT. Additionally, the FDIC had established independent quality assurance testing for NFE that is performed by the FDIC’s CQMS.

  • SIT ensures that all business functions perform as designed on an end-to-end basis across the NFE applications and platforms. SIT verifies that the application modules interact correctly within PeopleSoft financial modules, including all interfaces that send or receive transactional data to/from the NFE. Guidelines for this testing are based on the DOF Corporate Applications SIT Approach, dated June 10, 2004, and the NFE SIT Test Plan for Interfaces, dated June 18, 2004.
  • UAT is the final round of NFE testing. The purpose of UAT is to secure the agreement of all business process owners that (1) the PeopleSoft modules, as modified and configured, and (2) the impacted FDIC legacy application interfaces meet the business owners’ current stated business requirements when used in conjunction with processes and procedures developed by the business owners and the NFE business planning team. To accomplish these objectives, a three-pass testing strategy was used; pass one began December 1, 2004, and pass three testing ended March 31, 2005. On February 4, 2005, the FDIC reported that 14 of 16 NFE Phase I core business process areas had entered into the first pass of UAT with 77 percent of all planned scripts successfully completed.

CQMS performed quality assurance test activities over a 3-week period starting on November 1, 2004. The scope of this review, as stated in the CQMS NFE Test Plan, dated September 22, 2004, was to verify the effectiveness of the SIT performed against NFE core financial modules and interfaces. Five of the 26 applications were tested on-line, and the rest of the applications were inspected. CQMS performed the five on-line tests on systems considered critical to NFE production, including the primary receivership and subsidiary financial reporting system, Dividend Processing System, corporate human resources systems, Electronic Travel Voucher processing system, and the FDIC Legal Division’s principal information system. For inspections, CQMS placed reliance on requirements functionality reviews that occurred for each application during SIT.

Managing Test Activities

The NFE project management team manages test activities using test plans and results documentation located in several data repositories. For example, NFE-related test documentation is located in the FDIC’s Digital Library (FDL) and in an NFE project-specific documentation tool, StarTeam. FDL is a corporate-wide documentation management vehicle that publishes information throughout the FDIC. StarTeam, designed for exclusive use by the NFE team, contains documents that are intended to represent official software baseline configuration items. Upon completion of tests, signed approvals are required from the tester, FDIC NFE Team Lead, and Business Area Representative acknowledging that conditions tested work as designed.

Additionally, the NFE project uses a data repository, referred to as TestDirector, to separately track and record defects identified for a specific system and level of testing. When a defect is determined to require a change to the baseline, a change request must be completed and reviewed by the applicable Change Control Board (CCB) for core NFE modules or interface/legacy systems. Changes are made in accordance with a defined and documented change management process for NFE core and legacy system interfaces.
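
The defect-to-change-request traceability described above can be illustrated with a brief sketch. The Python fragment below is illustrative only and assumes hypothetical record layouts and identifiers; it does not represent TestDirector’s or the NFE project’s actual data structures.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Defect:
        """Hypothetical defect record showing the traceability links discussed in this report."""
        defect_id: str
        system: str                               # e.g., an NFE core module or a legacy interface
        test_level: str                           # e.g., "SIT", "UAT", or "CQMS"
        test_script_id: str                       # test origin of the defect
        change_request_id: Optional[str] = None   # set when the fix changes the baseline

    @dataclass
    class ChangeRequest:
        """Hypothetical change request record reviewed by the applicable CCB."""
        cr_id: str
        defect_ids: list = field(default_factory=list)  # back-links to originating defects

    def is_two_way_traceable(defect: Defect, cr_index: dict) -> bool:
        """True when the defect's change request (if any) also links back to the defect."""
        if defect.change_request_id is None:
            return True  # no baseline change; nothing further to trace
        cr = cr_index.get(defect.change_request_id)
        return cr is not None and defect.defect_id in cr.defect_ids

    # Illustrative use: a hypothetical UAT defect that required a baseline change.
    cr = ChangeRequest(cr_id="CR-101", defect_ids=["DEF-2045"])
    defect = Defect(defect_id="DEF-2045", system="ETV interface", test_level="UAT",
                    test_script_id="UAT-AP-017", change_request_id="CR-101")
    assert is_two_way_traceable(defect, {cr.cr_id: cr})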

III.   Detailed Findings

In the course of performing the NFE audit, KPMG provided DOF and the NFE project team with detailed findings regarding NFE test activities. The findings are summarized in this section of the report.

Finding 1: Production Simulation Tests Needed

Condition:

KPMG identified the following limitations during NFE UAT:

  • users were not adequately trained,
  • test scripts applied did not always test a representative number of chart of accounts that would be processed in a normal operational environment,
  • business processes were not tested sequentially from start to finish without interruption as in a normal workflow process, and
  • UAT does not include production simulation test activities.

Cause:

Time and schedule constraints were a contributing factor to the level of training provided. According to NFE process leads for core NFE modules, the training documentation provided detailed instructions related only to navigating through “vanilla” PeopleSoft menus and screens without explaining: (1) the purpose of PeopleSoft menus and screens, (2) their applicability to new FDIC business processes in adapting to NFE, and (3) troubleshooting problems. Additionally, time and resource constraints driven by the implementation schedule appear to have impacted the scope of tests employed, including not allocating sufficient time for production-simulation testing prior to system deployment. According to DOF and DIT officials involved in NFE project management oversight, users were involved in planning and creating test scripts for systematic formal execution in UAT, which lessens the need for additional levels of testing.

Criteria:

Guidance for Software Verification and Validation processes in NIST Special Publication 500-234, Reference Information for the Software Verification and Validation Process, and the CMMI state that the major objectives are to (1) comprehensively analyze and test software during development to determine that the software correctly performs its intended functions, (2) ensure that the software performs no unintended functions, and (3) provide information about software quality and reliability. Where possible, validation activities should be accomplished within the production environment. Additionally, JFMIP financial system implementation guidance states that qualifications testing ensures a certain level of compliance with government-wide requirements, but should be viewed as “entry criteria.” Agencies should conduct supplemental testing to ensure that a financial management system meets their specific requirements and to ensure adequate system performance.

Effect:

User training issues and gaps in testing coverage increased the possibility that NFE may not function as intended in its operational environment and that users may not be able to carry out the new business processes in NFE. Without production simulation testing, unanticipated results may not have been minimized to an appropriate level and may have caused processing delays and incomplete or inaccurate data that affect financial management reporting.

Level of Risk: High

Recommendations Provided to the FDIC for Consideration (January 26, 2005):
  •   Improve user training documentation for UAT and deployment activities to explain, where appropriate, the fields that should be completed; the fields’ applicability to new FDIC business processes in adapting to NFE; and how to troubleshoot problems.

  •   Perform unscripted user testing in the production environment to simulate “go live.” As a common practice, the FDIC should consider the following:

      •   Provide sufficient time prior to deployment to take corrective actions where necessary.

      •   Independently execute a few days’ transactions from the highest-volume business days in the past year, and follow the new business processes and procedures.

Finding 2: Effectiveness of Accounting Verifications for Month- and Year-End Closings

    Condition:

    Documented evidence did not exist for accounting-based verifications performed for month- and year-end closings during SIT. Consequently, KPMG could not determine from script plans or test results the validity of month- and year-end financial information reported from the tests. Based on our review of the UAT schedule and SIT results, Asset Management was the only module for which reports were included as part of its month-end closing process and there was evidence of reconciliation processes. KPMG also noted that scripts on identifying or researching un-posted transactions and posting errors were omitted from SIT.

    In reviewing plans for the next level of month- and year-end tests in UAT, KPMG noted that the test plans for UAT were similar to those for SIT, with the exception that users were responsible for performing the tests. However, the test plans did not include specific instructions to perform accounting verifications and reconciliations. UAT is intended to be user-oriented; therefore, tests at this juncture should more closely resemble the month- and year-end procedures that users will actually perform. Additionally, CQMS stated in its Independent Test Plan for NFE that CQMS would not perform an independent review of the month-end and year-end test processes. According to CQMS officials, this level of testing was outside the scope of the system “integration-based” requirements testing activities performed against NFE core financial modules and interfaces.

    Cause:

    The NFE project team had not developed formal accounting-based reconciliation test plans for month- and year-end closings. Business process documents showed only how to navigate through PeopleSoft system menus and screens. Draft user guide job aids and checklists also did not address accounting verification and reconciliation procedures.

    Criteria:

    GAO and OMB guidelines for compliance with the FFMIA state that standards and procedures should be established to maintain financial system integrity. For month- and year-end closings, such standards and procedures would include user procedures for system performance data integrity validations, such as reconciliations between reports produced and data sets within the system and the results of validity combination and balance edits. GAO’s Standards for Internal Control in the Federal Government state that internal controls and all transactions and other significant events need to be clearly documented and maintained and should be readily available for examination in the form of management directives, administrative policies, or operating manuals.
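
    As a concrete illustration of the accounting-based verifications described above, the sketch below compares a report total with the general ledger data it should reconcile to and flags unposted transactions. It is a simplified, hypothetical example and does not represent NFE’s or PeopleSoft’s actual tables, reports, or closing procedures.

        from decimal import Decimal

        def reconcile_month_end(report_total: Decimal, gl_entries: list) -> dict:
            """Hypothetical month-end check: compare a produced report total with the
            sum of posted general ledger entries and flag any unposted transactions."""
            posted = [e for e in gl_entries if e["status"] == "posted"]
            unposted = [e for e in gl_entries if e["status"] != "posted"]
            gl_total = sum((e["amount"] for e in posted), Decimal("0"))
            return {
                "gl_total": gl_total,
                "difference": report_total - gl_total,   # should be zero when reconciled
                "unposted_count": len(unposted),         # should be zero at closing
                "reconciled": report_total == gl_total and not unposted,
            }

        # Illustrative use with three hypothetical ledger entries, one of them unposted.
        entries = [
            {"amount": Decimal("1500.00"), "status": "posted"},
            {"amount": Decimal("250.00"), "status": "posted"},
            {"amount": Decimal("75.00"), "status": "error"},   # posting error to research
        ]
        result = reconcile_month_end(Decimal("1750.00"), entries)
        print(result["reconciled"], result["unposted_count"])  # False 1 until the error is resolved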

    According to CMMI guidelines, defined and documented user-oriented procedures, when part of a new integrated information system, are an essential component of the test process to demonstrate that the system fulfills its intended use when placed in its operational environment and meets user needs.

    Effect:

    Without month-end and year-end test processes that fully incorporate accounting-based verifications and reconciliations, the FDIC may have lacked adequate assurance that financial information would be recorded accurately and completely in the general ledger. This posed significant operational risks that may have negatively impacted NFE system integrity and financial reporting capabilities in production.

    Level of Risk: High

    Recommendations Provided to the FDIC for Consideration (January 26, 2005):
  •   Fully define in business process documentation and UAT, the detailed accounting-based verification and reconciliation procedures that are required.

  •   Simulate month-end closing by following accounting-based control and reconciliation procedures to provide assurance that users would be able to process a month-end closing completely and accurately in a timely manner.

  •   Perform separate year-end testing using a copy of the production database before executing the actual 2005 year-end process in the production environment. This database should include all the required patches and fixes for the year-end process. Year-end testing should include closing for all modules.

  •   Document detailed accounting-based test results during UAT scheduled for month- and year-end processing.

Finding 3: Defect Consolidation, Traceability, and Documentation

    Condition:

    A centralized and controlled defect tracking system had not been established to ensure traceability to adequate documentation for the purpose of managing problem identification and resolution. In addition, test processes did not provide for two-way traceability of defects from their test origin to change requests, when applicable, and did not require that documentation needed for defect resolution be retained.

    Cause:

    The defect tracking tool, TestDirector, is a relatively new application development tool deployed in the last 6 months. Project time and resource constraints may have precluded the NFE project team from fully implementing TestDirector functionality. Additionally, defects relating to each interface or legacy system were managed by different business process owners, and related documentation is maintained in separate locations. Also, NFE process leads lacked adequate guidance in referencing test information to defect logs and reports. Finally, change management procedures had not been adequately followed.

    Criteria:

    According to CMMI guidance, defect analyses should include assessing the impact of defects, frequency of occurrence, similarity between defects, and the time and resources needed to resolve the defects. Proper attention should be given to storage and retrieval procedures so that data is available and accessible for analysis, possible reanalysis, or documentation purposes. Additionally, change requests should address failures and defects in the work products in assessing the impact that the change in addressing the defect will have on the work product, related work products, schedule, and cost.

    NFE change management guidance also states that changes to requirements made because of defects should trace directly to the applicable defect reports.

    Effect:

    Recurring defects were more difficult to identify, which may have impacted the ability to determine in a timely manner their priority and root cause and method for resolution. Software maintenance risks were also increased because of the inability to provide specific test reference information in the defect logs. Moreover, tracing a change request to a defect was difficult.

    Additionally, a decentralized management and documentation approach could result in inefficient retrieval or misplacement of important documentation. Delays in testing and duplicated effort could have occurred if the scripts had to be retested to obtain the necessary information for sign-offs.

    Level of Risk: Medium

    Recommendations Provided to FDIC for Consideration (February 17, 2005):
  •   Consolidate all the defects under one centralized repository to facilitate the tracking and monitoring of recurring defects going forward.

  •   Emphasize two-way traceability of defects from their test origin to change request documentation when applicable.

  •   Create a shared location to store UAT defect screen captures and test results instead of depending on users’ personal folders to improve documentation for defect problem resolution.

Finding 4: Effectiveness of Test Activities Performed

    Condition:

    NFE UAT documentation was not effectively organized to independently verify the accuracy and completeness of tests performed for NFE business processes and did not consistently describe the relationship of a given test script to an NFE business process.

    Test activities and scenarios were missing from sampling scripts in SIT and UAT for accounts payable, purchasing, asset management, and the general ledger. The missing items noted also appeared applicable to compliance requirements of the FFMIA. Most notably, UAT did not include testing the expenditure budget control function for the Receivership Operations module, an essential requirement to achieve compliance with FFMIA.

    Finally, according to FDIC officials, there were no plans at this time for testing about 50-70 customized FMS reports currently generated by the Walker general ledger. Access to this same information in NFE was distributed across different modules and a different chart of accounts. A strategy for obtaining these reports by priority level through NFE had not been determined.

    Cause:

    NFE business processes for test script development were not standardized and relied on the ability of the test facilitators and testers to develop appropriate plans to capture business process activities and scenarios to test. NFE project officials stated that they did not prescribe a formal structure so testers would have more leverage and flexibility in developing test plans that would best fit their development and test needs. Users for each respective area were involved in deciding which tests to perform for both SIT and UAT.

    With respect to the budget control function testing, the FDIC budget process lead indicated that this testing would be performed during UAT re-tests of core financial modules.

    Criteria:

    GAO and OMB guidance states that government financial management systems shall provide assurance that transactions can be processed in accordance with FFMIA requirements. These requirements are applicable to key FMS functions, including accounts payable, purchasing, disbursement, funds control, and general ledger processing. JFMIP guidance also states that agencies need to consider performing supplemental testing beyond qualification tests performed to ensure financial management systems meet government and organizational requirements.

    CMMI guidance on test validation practices states that UAT cases and procedures, including operational scenarios and procedures, are applicable validation procedures that warrant consideration to determine whether a system will function as intended in its operational environment. Also, NIST test guidance states that test verification and validation processes should be comprehensive. GAO’s Standards for Internal Control in the Federal Government state that control activities such as tests of transactions for system deployment should be well documented, maintained, and readily available for examination.

    Effect:

    Independent verification of the effectiveness of business process activities tested and software management oversight over those activities become cumbersome without a formal and consistently applied process that provides information on a given business process tested, activities to perform for that process, and related scenarios to test for each activity.

    Scenarios that were missing and untested may have impacted NFE system integrity in the production of accurate and complete financial management reports. Consequently, financial management and reporting risks may not have been mitigated to an acceptable level.

    In summary, the issues cited could impact NFE technical performance expectations, operational capabilities, and users’ ability to apply new business processes.

    Level of Risk: Medium

    Recommendations Provided to FDIC for Consideration (February 16, 2005):
  •   Consider some level of production simulation testing for addressing missing test scenarios and any unidentified problems not sampled.

  •   Develop more structured test development processes for better software test management and oversight of business process activities and scenarios to test.

  •   Provide for more effective test documentation retention and control practices.

  •   Elevate the priority of budget and commitment control testing to ensure that budget control features are accurately and completely tested within the appropriate time frame.


    APPENDIX A:  OBJECTIVE, SCOPE, AND METHODOLOGY

    Objective
    The objective of this audit was to determine the adequacy of test plans and processes and defect and change management processes in resolving problems identified during testing. KPMG conducted its audit work in Washington, D.C., and Dallas, Texas, from November 1, 2004 through March 8, 2005, in accordance with generally accepted government auditing standards.

    Scope
    The scope of coverage focused on test activities and processes critical to NFE deployment, which included evaluating the following:

    • Effectiveness of FDIC test activities policies and procedures applicable to NFE deployment.
    • Accuracy and completeness of selected NFE core module test activities that had been performed.
    • Test activities critical to final decisions on NFE’s deployment schedule that included SIT, UAT, and Independent Testing Performed by the CQMS.
    • Test activities for three of eight selected NFE legacy interfaces considered critical to deployment, which included the Control Totals Module (CTM) as the primary receivership and subsidiary financial reporting system, the Electronic Travel Voucher (ETV) system, and the Dividend Processing System (DPS).
    • Effectiveness of the test and defect management resolution processes, including tests of those processes.

    Methodology
    KPMG performed the following in meeting audit objectives:

    • Conducted interviews with DIT and DOF officials who were responsible for managing and implementing the NFE project and with representatives from Accenture LLP, the consulting firm hired by the FDIC to provide NFE implementation services, including the performance of system development test activities. To obtain an understanding of NFE test activities that had been performed, including procedures and practices, KPMG also spoke with end users from several divisions in the FDIC’s Headquarters and Dallas office to determine the adequacy of their involvement in test activities such as UAT.
    • Identified applicable FDIC policies and procedures for performing NFE test activities.
    • Performed gap analysis of FDIC policies and procedures for NFE test activities against generally accepted system development test activities performed (see Appendix B for applicable standards and guidelines).
    • Sampled requirements for the NFE core systems and obtained related test plan and results documentation that was used in assessing the nature and extent of NFE test activities performed.
    • Obtained and reviewed the test plan, results, and reconciliation process for CTM, DPS, and ETV systems.
    • Observed UAT activities to assess the effectiveness of test activities performed for accounts payables, disbursements, purchase orders, and the general ledger.
    • Identified and reviewed monitoring and oversight activities over the test and defect management resolution processes, including change management practices for addressing requirement changes.

    KPMG also determined the risk levels for the NFE project where specific risks are likely to occur.

    Prior Audit Coverage

    Prior to this audit, the FDIC OIG issued the following reports related to the NFE.

    • Audit Report No. 05-007 entitled, Management Controls Over the Re-baselined New Financial Environment Project, dated February 18, 2005, which addressed whether the FDIC had established adequate management control over the re-baselined NFE project.
    • Audit Report No. 03-045 entitled, New Financial Environment Scope Management Controls, dated September 29, 2003, which addressed whether the FDIC had implemented adequate controls for ensuring that the scope of the NFE project was effectively managed.
    • Audit Report No. 03-016 entitled, The New Financial Environment Project Control Framework, dated March 5, 2003, which addressed whether the FDIC had established a control framework for the NFE project.
    • Audit Report No. 03-002 entitled, Preaward Review of the New Financial Environment Project, dated October 7, 2002, which provided observations on selected procedures and documents related to the NFE Request for Proposal.
    • Evaluation Report No. 01-004 entitled, The New Financial Environment Project, dated December 7, 2001, which assessed the reasonableness of the NFE cost-benefit analysis and the financial systems architecture.


    APPENDIX B:  APPLICABLE STANDARDS AND GUIDANCE

    The references listed here represent the standards and guidance, applicable at the time this document was written, that were considered in the performance of KPMG’s evaluation. Some of the references are statutes and regulatory sources, whose provisions may or may not be binding on the FDIC; see individual references for further information. Statutory and regulatory sources that are not binding on the FDIC can provide statements of prudent business practices. The Internet sites and various references appearing below are subject to change.

    Federal Statutes

    Federal Financial Management Improvement Act (FFMIA), Pub. L. 104-208, 1996.
    http://www.whitehouse.gov/omb/financial/ffs_ffmia.html

    The statute requires agencies to implement and maintain financial management systems that substantially comply with federal financial management system requirements. These requirements are detailed in the Financial Management System Requirements series issued by the JFMIP and in OMB Circular A-127, Financial Management Systems, and OMB’s Implementation Guidance for the Federal Financial Management Improvement Act (FFMIA) of 1996. The act does not apply to the FDIC, but its provisions and standards contain prudent practices that the FDIC may choose to follow.

    Systems Development Life Cycle (SDLC) Standards and Guidance

    NIST Special Publication 500-234, Reference Information for the Software Verification and Validation Process, April 1996.
    http://hissa.nist.gov

    The publication provides guidance for performing verification and validation activities to comprehensively analyze and test software during development to determine that the software performs its intended functions correctly, ensure that it performs no unintended functions, and provide information about its quality and reliability.

    Capability Maturity Model Integration (CMMI) for Systems Engineering, Software Engineering,
    Integrated Product and Process Development, and Supplier Sourcing,
    V1.1, March 2002.
    http://www.se.cmu.edu/cmmi

    The publication provides process management guidance for SDLC projects to include best practices in performing activities related to software risk management, verification and validation, change management, and defect management resolution.

    Financial Management System (FMS) Standards and Guidance

    OMB Circular No. A-127, Financial Management Systems, July 1993.
    http://www.whitehouse.gov/omb/circulars/a127/a127.html

    The publication prescribes policies and standards for executive departments and agencies to follow in developing, operating, evaluating, and reporting on financial management systems.

    Joint Financial Management Improvement Program (JFMIP), Forum Highlights:
    System Implementation Success Factors Using COTS Financial Systems,
    JFMIP Steering Committee and Chief Financial Officers’ Council, June 2003.
    http://www.jfmip.gov/jfmip/otherreports.htm

    The publication addresses critical success factors for successfully implementing COTS software in discussions with senior federal financial managers, financial system program managers, and private sector leaders.

    OMB Memorandum for the Heads of Executive Departments and Establishments, Chief Financial Officers, and Inspectors General – Revised Implementation Guidance for the Federal Financial Management Improvement Act, January 2001.
    http://www.whitehouse.gov/omb/financial/ffmia_implementation_guidance.pdf

    Key provisions applicable to NFE testing of key FMS functions include the following:

    • FMS shall consistently process common transactions throughout the financial system and shall consistently use and apply internal controls throughout the financial system.
    • Assets shall be accounted for reliably so that they can be properly protected from loss, misappropriation, or destruction.
    • Budget execution is integrated in the core financial system with accounts payable, accounts receivable, and general ledger.
    • Financial statements and other required financial and budget reports shall be prepared using information generated by the FMS.

    U.S. GAO, Core Financial System Requirements, Checklist for Reviewing Systems Under
    the Federal Financial Management Improvement Act, GAO/AIMD-00-21.2.2, February 2000.
    http://www.gao.gov/special.pubs/ai2122.pdf

    The publication addresses FFMIA system integrity control compliance requirements. Although the FDIC is not mandated to comply with FFMIA requirements, the FDIC intends to voluntarily comply with such standards. Provisions applicable to NFE testing would include, for example, the following:

    • System performance data integrity validations such as reconciliations between produced reports and data sets within the system and the results of validity combination and balancing edits.
    • Accurate and complete postings to the current and prior months concurrently until month-end closing. Accurate and complete balances must be maintained and accessible through on-line queries for both the current and prior fiscal years until year-end closing.
    • Adjustment of assets or expenses recorded with the liability if the authorized payment (based on the invoice) is different from the amount accrued (based upon receipt and acceptance) using contract information.
    • Payment Management Function (Accounts Payable/Purchasing):
      • Provides the capability to capture, store, and process appropriate invoice information in accordance with Department of the Treasury standards and, as necessary, to satisfy requirements of the Prompt Payment Act.
      • Provides the capability of splitting an invoice into multiple payments on the appropriate due dates when items on the invoice have different due dates.
      • Automatically updates funds control and budget execution balances.
      • Appropriately posts assets or expenses with the liability.
    • Funds Control Process (Funds Availability Editing), as illustrated in the sketch after this list:
      • Provides for on-line notification of funds availability prior to the distribution of lower-level funding and the processing of commitment, obligation, or expenditure transactions.
      • Checks commitment transactions against available funds.
      • Includes adequate controls to prevent the recording of commitments that exceed available balances.
      • Updates all appropriate accounts to ensure that the system always maintains and reports the current status of funds for all open accounting periods.
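
    The funds control provisions above can be illustrated with a brief sketch. The following Python fragment is a simplified, hypothetical funds availability edit; it is not the NFE or PeopleSoft commitment control implementation.

        from decimal import Decimal

        def check_funds_availability(budget: Decimal, committed: Decimal,
                                     obligated: Decimal, expended: Decimal,
                                     new_commitment: Decimal) -> bool:
            """Hypothetical funds availability edit: reject a commitment that would
            exceed the remaining available balance (illustrative only)."""
            available = budget - (committed + obligated + expended)
            return new_commitment <= available

        # Illustrative use: a $40,000 commitment against $35,000 of remaining funds is rejected.
        approved = check_funds_availability(budget=Decimal("100000"), committed=Decimal("25000"),
                                            obligated=Decimal("30000"), expended=Decimal("10000"),
                                            new_commitment=Decimal("40000"))
        print(approved)  # False: the commitment exceeds the $35,000 available balance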

    U.S. GAO, Standards for Internal Control in the Federal Government, November 1999.
    http://www.gao.gov/special.pubs/ai00021p.pdf

    The publication defines the minimum level of quality acceptable for internal control in government and provides the basis against which internal control is to be evaluated, including documenting all transactions and other significant events. Documentation should be readily available for examination including documentation on a wide range of diverse activities, such as approvals, authorizations, verifications, reconciliations, performance reviews, maintenance of security, and the creation and maintenance of related records that provide evidence of execution of these activities. The FDIC is not mandated to but chooses to follow these practices.

    FDIC/NFE Specific SDLC Standards and Guidance

    The FDIC has issued several policies and procedures in managing NFE test activities, including changes to requirements that can be viewed within the FDIC’s Digital Intranet Library:

    • DOF Corporate Applications Testing Strategy, version 2 (3/15/2004)
    • DOF Corporate Applications SQT [system qualification test] Approach, version 4 (6/10/2004)
    • DOF Corporate Applications SIT Approach, version 3 (6/10/2004)
    • NFE User Acceptance Test Plan, version 2.0 (08/30/2004)
    • NFE SIT Test Plan, version 1.5 (06/18/2004)
    • Independent Test Plan for NFE, version 2.0 (09/22/2004)
    • NFEi Change Control Process


    APPENDIX C:  NFE TEST STRATEGY AND PROCESSES

    The FDIC is replacing the existing Walker Interactive system with PeopleSoft financial software. The implementation of the NFE will necessitate the retirement and modification of interconnecting DOF corporate applications. To promote consistency in testing for a quality product, the FDIC developed, defined, and documented testing processes for implementing NFE. The strategies, tools, and processes defined for this effort are described in this appendix.

    Testing Overview

    Testing is an essential part of the SDLC and a critical means for reducing software delivery risks. Testing is a structured way of validating that business and performance requirements and use case specifications are properly implemented in a solution that meets a customer’s functional, technical, operational, and maintenance expectations.

    DOF testing for NFE has been divided into seven distinct stages. Each stage tests a broader level of functional and technical complexity than the previous stage. Accordingly, test conditions in each successive stage of testing are derived from successively higher-level sources. For example, “low level” unit test conditions are derived from the conditions specified in the detailed design, while “high level” UAT conditions are derived from the system requirements or use cases.

    Such multi-stage testing is referred to as “V-Model” testing, which is illustrated in Figure 1. The left side of the “V” indicates the source documents from which we derive test cases. The right side of the “V” shows the stages of testing.

    Figure 1: V-Model [Figure: multi-stage V-Model testing diagram]
    Source: DOF Corporate Applications Testing Strategy, Version 2, March 15, 2004.

    The V-Model requires that each major deliverable be verified and validated in an attempt to identify problems as early as possible and to ensure that specifications are complete and correct and adhere to relevant standards. Testing ensures that the specifications are correctly implemented and that the solution meets the business and performance requirements or use cases.
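
    To summarize the alignment shown in Figure 1, the sketch below maps each test stage to the source artifacts from which its test conditions are derived, as described in the sections that follow. It is a simplified, illustrative Python summary of the DOF testing strategy rather than part of that strategy.

        # Illustrative summary of the V-Model alignment described in this appendix:
        # each test stage draws its conditions from a development artifact on the
        # opposite side of the "V" (simplified; see Figure 1).
        V_MODEL_SOURCES = {
            "Unit Test": "detailed design specifications",
            "System Qualification Test (SQT)": "business requirements and use cases for a single application",
            "System Integration Test (SIT)": "SQT conditions extended to end-to-end business processing",
            "CQMS Test": "application requirements (independent functional, standards, and performance tests)",
            "User Acceptance Test (UAT)": "user requirements defined in the requirements definition stage",
            "Performance Test": "performance requirements and service-level agreements",
            "Operational Readiness Test (ORT)": "production environment readiness and support procedures",
        }

        for stage, source in V_MODEL_SOURCES.items():
            print(f"{stage}: test conditions derived from {source}")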

    Test Stages

    A stage refers to the major development process steps in a project’s life cycle: the planning stage, requirements definition stage, design stage, development stage, test stage, etc. There are also different stages of tests: the unit test stage, system qualification test (SQT) stage, etc.

    The V-Model diagram shows how the test stages align with the development stages. The test planning tasks are performed on the left side of the V-Model within the plan, requirements definition, design, and development stages. The test execution tasks appear on the right side and belong to the development (unit test); test (SQT, performance, and user acceptance tests); and implementation (operational readiness test) stages. The early test execution tasks focus on confirming the high-level and detailed designs, and later tasks focus on achieving overall functional and technical requirements. The test stages for a project’s life cycle are described below.

    Unit Test

    The purpose of unit testing is to verify that the programming work units have correctly implemented the detailed design specifications. Programming work units are the most granular testable software components. Types of work units include windows, functions or algorithms, and simple batch programs. Every line of code should be exercised, every loop iterated, and all conditions tested at this stage. The scope of the test conditions encompasses logical branches, limits, etc. All work units developed by the development teams will be unit tested and where feasible, multiple, related work units will be tested together.

    Although a development team does not gather formal performance metrics at this stage, it is the team’s responsibility to identify components that represent significant performance risk.

    Unit testing is the responsibility of the DIT application development teams and is required for all new or modified code. Unit test planning is the responsibility of the programmer who coded the module to be tested and is performed in the early part of the development stage, when the detailed designs are finalized. The test execution is performed by the developer/tester (in the development environment) in the later part of the development stage, after the application units are coded.
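
    As a minimal illustration of the branch and condition coverage described above, the hypothetical Python unit test below exercises every branch of a simple work unit. It is illustrative only and is not NFE project code.

        import unittest

        def classify_invoice(amount):
            """Simple illustrative work unit: classify an invoice amount."""
            if amount < 0:
                raise ValueError("amount cannot be negative")
            return "high" if amount >= 10000 else "low"

        class ClassifyInvoiceTest(unittest.TestCase):
            """Exercises every branch and boundary of the work unit, as the unit test stage requires."""
            def test_low_amount(self):
                self.assertEqual(classify_invoice(500), "low")
            def test_boundary_amount(self):
                self.assertEqual(classify_invoice(10000), "high")
            def test_negative_amount_rejected(self):
                with self.assertRaises(ValueError):
                    classify_invoice(-1)

        if __name__ == "__main__":
            unittest.main()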

    System Qualification Test

    The purpose of SQT is to verify that each of the applications, developed or modified, functions as designed across all product business functions. SQT validates that the requirements of each application have been met.

    Unlike unit testing, in which test conditions are derived from design specifications, SQT conditions are derived from business requirements and use case events that are internal to a single application. The test team will initially use limited but realistic data to test basic functionality and will gradually build complexity into the processes, testing more realistic business scenarios with realistic data.

    SQT will be the responsibility of the respective DIT application test teams and is required for all new or modified code. The DIT test team, including DOF resources, will execute the test for each application in a separate environment so that SQT activities and code do not interfere with other activities and code in the development environment. Interfaces to external systems may not be available at this stage of testing. However, if external systems (or stub equivalents) are available, the test teams will test those during SQT. Test teams, including DOF resources, perform SQT planning in the design stage when the designs are created. The DIT test team executes the SQT in the test stage after the developers complete the unit test.

    System Integration Test

    The purpose of SIT is to ensure that all business functions can be performed on an end-to-end basis across the business applications and platforms. SIT verifies that the applications interact correctly with each other and with their external interfaces.

    Applications are eligible for migration to the SIT environment upon meeting all the defined exit criteria of the SQT and entry criteria of the SIT. The test conditions are an extension of the SQT conditions in that they will include full end-to-end business processing and verification.

    For purposes of the NFE program, SIT is the responsibility of the respective application test teams coordinating with a central NFE test team and is required for all new or modified code that interfaces with an external system. Each application team (both DIT and DOF resources) will be responsible for testing its application and interfaces. SIT planning is performed in the design stage when the high-level designs are created. The test execution is performed in the test stage after successful completion of SQT. SIT will be executed in a logically separate environment within the quality control and testing environment. It is imperative to separate the code and activities of the SQT, SIT, and other test stages so that code from one environment does not interfere with code in another environment or test stage.

    CQMS Test

    The purpose of the CQMS test is to validate the required business functions through an impartial, independent testing group prior to UAT. This test group analyzes and assesses the application requirements and develops functional, standards, and performance tests based on those requirements. The testing group executes and reports on the tests and uses the test results to determine the readiness of the application to proceed to the next phase.

    CQMS testing is the responsibility of the CQMS group and is executed in a separate environment. Throughout the CQMS test, there is an open communication line to the application project to receive requirement updates, provide frequent feedback in problem reports, and collaborate on problem investigations. Application development and test teams are not responsible for the test execution but are required to provide functional and technical support throughout the testing.

    User Acceptance Test

    The purpose of UAT is to ensure that the users and stakeholders are satisfied with the solution. Only after UAT is completed can the product be released. The UAT allows the end users to complete one final review of the system prior to its deployment.

    Applications are eligible for migration to the UAT environment upon meeting all the defined exit criteria of the SIT and CQMS tests. The test conditions can be a subset of the SIT conditions tailored for each representative user group.

    UAT is the responsibility of the respective DIT test teams with test planning and execution resources provided by the DOF end user community. UAT planning is performed in the requirements definition stage during which user requirements are defined. The test execution is performed in the test stage after successful completion of SIT and CQMS tests. Test execution of UAT will take place in the QUAL environment.

    Performance Test

    The purpose of the performance test is to ensure that the system is capable of operating at the load levels specified by the performance requirements and any agreed-upon service-level agreement. This test will be performed in the presence of any operations that could affect performance capabilities (as would occur in the production environment). These operations include batch interfaces, overnight batch runs, on-line interfaces, user interactions, etc. The DIT test team will monitor system performance across all areas of the application functionality (on-line response times, batch job schedules), servers, databases, networks, etc.

    Applications are eligible for migration to the performance test environment after successful completion of the SQT or SIT. This test may not be required for existing legacy systems that are being modified. The application project manager and DIT representative will make a determination based on the scope and extent of the modifications. If the changes are significant enough to impact service levels, a performance test will be required. Specific performance test environment requirements and assumptions will be documented in the Performance Test Approach document.

    Performance testing is the responsibility of the DIT application teams with support from the infrastructure group. Performance test planning is performed in the requirements definition stage during which the performance requirements are defined. The test execution is performed in the test stage in a separate environment that is a replica of the production environment.
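
    As a minimal sketch of the approach described above, the following example simulates a fixed number of concurrent users, measures on-line response times, and compares the 95th percentile against a service-level target. The transaction, user counts, and threshold are hypothetical; actual values would come from the performance requirements and the Performance Test Approach document.

        import statistics
        import time
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical service-level target and load profile (illustrative only).
        RESPONSE_TIME_SLA_SECONDS = 2.0
        SIMULATED_USERS = 25
        REQUESTS_PER_USER = 10

        def post_journal_entry():
            """Stand-in for an on-line transaction; replace with a real call."""
            time.sleep(0.05)  # simulated processing time

        def one_user_session(_):
            timings = []
            for _request in range(REQUESTS_PER_USER):
                start = time.perf_counter()
                post_journal_entry()
                timings.append(time.perf_counter() - start)
            return timings

        def run_load_test():
            with ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
                sessions = list(pool.map(one_user_session, range(SIMULATED_USERS)))
            all_timings = [t for session in sessions for t in session]
            p95 = statistics.quantiles(all_timings, n=20)[-1]  # 95th percentile
            print(f"95th percentile response time: {p95:.3f}s "
                  f"(target {RESPONSE_TIME_SLA_SECONDS}s)")
            return p95 <= RESPONSE_TIME_SLA_SECONDS

        if __name__ == "__main__":
            print("Performance test", "PASSED" if run_load_test() else "FAILED")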

    Operational Readiness Test (ORT)

    The purpose of ORT is to test the production environment’s readiness to handle the new system or changes. ORT verifies that the correct functionality, architecture, and procedures are defined and implemented to allow production support teams to run, maintain, and support the system in production. ORT may also involve verifying that the system is correctly installed and configured in the production environment.

    Applications are eligible for migration to the ORT environment after successful completion of the UAT and performance tests. ORT may not be required for existing legacy systems that are being modified. The application project manager and DIT representative will make a determination based on the scope and extent of the modifications. If the changes are significant enough to affect operational procedures, ORT will be required.

    ORT is the responsibility of the DIT application teams with support from the operations and maintenance groups. ORT test planning is performed and executed in the implementation stage.

    Testing Activities

    The application teams will execute the same steps in every stage of testing. The following activities are common to all test stages:

    • “Develop test approach” provides the objectives, schedule, environment requirements, and entry and exit criteria for the test stage.
    • “Plan test” identifies test conditions and test cycles for the test stage.
    • “Prepare test” defines input data and expected results, scripts the test cycles, defines stubs and job streams, and prepares the cycle control calendar.
    • “Establish test environment” ensures that the environment is established and tested before test execution.
    • “Execute test” performs the scripts contained in the test model, compares the actual results to the expected results, and identifies and resolves discrepancies.
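
    The “execute test” activity, in particular, can be sketched as a small driver that runs scripted cases, compares actual results to expected results, and reports discrepancies for resolution. The function under test and the scripted test cases below are hypothetical.

        # Minimal sketch of the "execute test" activity; all names are hypothetical.
        def compute_interest(principal, rate):
            """Hypothetical work unit whose results the scripted cases verify."""
            return round(principal * rate, 2)

        TEST_CASES = [
            {"name": "standard rate", "inputs": (1000.00, 0.05), "expected": 50.00},
            {"name": "zero principal", "inputs": (0.00, 0.05), "expected": 0.00},
        ]

        def execute_tests(cases):
            discrepancies = []
            for case in cases:
                actual = compute_interest(*case["inputs"])
                if actual != case["expected"]:
                    discrepancies.append(
                        f"{case['name']}: expected {case['expected']}, got {actual}")
            return discrepancies

        if __name__ == "__main__":
            problems = execute_tests(TEST_CASES)
            print("Discrepancies:", problems if problems else "none")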


    APPENDIX D:  RISK ASSESSMENT APPROACH

    Risk Ratings
    Per CMMI and industry standard practices, software projects should establish a risk management strategy that includes the categorization of identified risks in order to develop a mitigation strategy that reduces risks to levels acceptable to management. KPMG assessed the potential impact of risks identified in this review based on professional judgment and applicable risk management criteria defined for the NFE project by the FDIC. The NFE project assesses risks based on probability of occurrence and impact as follows:

    Probability
    The likelihood of risk occurrence is quantitatively or qualitatively rated on the following scale:

    Probability     Uncertainty Statement           Evaluation of Impact (see Impact)

    > 80%           Extreme, Almost certain         5
    61%-80%         High, Likely                    4
    41%-60%         Medium                          3
    21%-40%         Low                             2
    1%-20%          Very Low, Highly unlikely       1


    Impact
    Impact is an estimate of the overall scale of the consequences should a risk occur. Impact measures the severity of adverse effects, or the magnitude of a loss, if the risk comes to pass and is rated on the following scale:

      5 - Critical impact; threatens overall success of NFE on a long-term basis.
      4 - High impact; significant disruption to successful delivery of NFE objectives, products, and benefits.
      3 - Medium impact; significant disruption to NFE schedule, cost, and products over the medium term.
      2 - Low impact; progress disrupted with moderate to low extensions to schedule and cost, across short term.
      1 - Very low impact; slight exposure.

    The two variables, impact and probability, are combined to assess the overall risk category as displayed in Figure 2.

    Figure 2: Risk Assessment Matrix

    Risk Assessment Matrix (combines the probability and impact ratings described in this section) [ D ]
    Source: The FDIC New Financial Environment Risk Management Plan developed by Accenture.
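
    As a minimal sketch of how the two ratings can be combined, the example below maps a likelihood percentage to the 1-5 probability scale above and bands the product of the probability and impact ratings into Low, Medium, and High categories. The banding cut-offs are an assumption made for illustration; the authoritative combinations are those defined in the NFE risk management plan (Figure 2).

        def probability_rating(probability_pct):
            """Map a likelihood percentage to the 1-5 probability scale above."""
            if probability_pct > 80:
                return 5
            if probability_pct > 60:
                return 4
            if probability_pct > 40:
                return 3
            if probability_pct > 20:
                return 2
            return 1

        def risk_category(probability_pct, impact_rating):
            """Combine probability and impact (1-5) into an overall category."""
            score = probability_rating(probability_pct) * impact_rating
            if score >= 15:   # assumed cut-off, for illustration only
                return "High"
            if score >= 6:    # assumed cut-off, for illustration only
                return "Medium"
            return "Low"

        # Example: a condition judged about 70% likely with a medium (3) impact.
        print(risk_category(70, 3))   # -> "Medium"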

    Risk categorization is based on the areas in which specific risks are likely to occur, including resource/cost, schedule, technical, operational, and external. The overall risks assigned by KPMG focused on issues affecting the FDIC’s ability to achieve NFE objectives that were technical or operational in nature. These factors, referred to as risk drivers, may also affect cost and schedule risks.

    Each risk is described further below:

    Technical
    Technology-based risks consider the non-achievement of the application specifications and benefits expected. These risks include new and non-standard platform technology, integration problems with existing systems, migration problems, performance expectations not achieved, environment complexity and functionality, and system operability.

    Operational
    Operational-based risks focus on the peripheral organizational and business operational re-engineering changes arising from the NFE implementation effort. These risks consider both the transitional and the long-term effects of the NFE’s introduction, including the organizational and behavioral changes required, the human and physical resource planning, and communication required to facilitate a smooth transition to the new structure.

    External
    External-based risks consider the environmental factors largely outside of the control of the NFE Project Management that can directly or indirectly affect the successful delivery of the NFE. Risks arising from legislative regulations, legal requirements, and the strategic direction and priority conflicts of a controlling body are profiled under this category.

    Resource/Cost
    Cost-based risks consider the non-achievement of the financial benefits of NFE. These cost risks include additional costs to change or resolve design, application program, or operational problems.

    Schedule
    Schedule-based risks focus on the non-achievement of expected system benefits within the specified time frame. These schedule-based risks arise from extensions resulting from scope changes, resource unavailability, and the additional time needed to resolve the problems discussed under Resource/Cost.



    Part II

    Corporation Comments and OIG Evaluation



    CORPORATION COMMENTS AND OIG EVALUATION

    The report contains one recommendation directed to the Director, DOF, and to the NFE project management team. The Director, DOF, provided a written response to the draft report on May 18, 2005. Management’s response is presented, in its entirety, beginning on page II-2. DOF management concurred with the recommendation. Based on management’s response, the report’s recommendation is considered resolved, dispositioned, and closed. DOF’s response to the report recommendation is summarized below, along with our evaluation of the response.

    Recommendation: KPMG recommends that the Director, DOF, and the NFE project management team review the risks identified and develop a risk resolution and action approach in accordance with the risk mitigation procedures outlined in the NFE risk management plan.

    DOF Response: DOF concurs with the recommendation to review the risks identified in the draft of the report and to develop a risk resolution and action approach. As OIG findings and recommendations were received, NFE project management reviewed and discussed the input in light of overall project risks and other mitigating efforts. Using the Summary of Findings table in the draft of this report as a guide, NFE project management developed a risk assessment matrix, submitted with DOF’s response, to summarize management’s conclusions of the identified conditions. DOF also responded that the NFE control framework afforded a high degree of confidence that a “go live” decision was appropriate under the circumstances.

    OIG Evaluation of Response: The risk assessment matrix summarizes the risk resolution and action approach for the conditions discussed in the report. The action effectively implements our recommendation. We consider the recommendation resolved, dispositioned, and closed.

    The risk assessment matrix also includes updated Level of Risk information showing that the high and medium risks of reported conditions have been reduced since the completion of field work on March 8, 2005. The risk levels could have been reduced through additional testing in response to our early notifications to the NFE management team regarding potential weaknesses and through additional planned tests to be conducted after audit field work. However, we did not perform additional work to validate the Level of Risk information.



    CORPORATION COMMENTS

    Management’s written response to the draft report, presented in its entirety (seven scanned pages).
    [ D ]



    MANAGEMENT RESPONSE TO THE RECOMMENDATION

    This table presents management’s response to the recommendation in our report and the status of the recommendation as of the date of report issuance.


    Corrective Action Taken or Planned/Status: As OIG findings and recommendations were received, NFE project management reviewed and discussed the input in light of overall project risks and other mitigating efforts. DOF reviewed the risks identified in the draft report and developed a risk assessment matrix to summarize management’s conclusions of the identified conditions.

    Expected Completion Date: Completed

    Monetary Benefits: N/A

    Resolved: [ a ] Yes

    Dispositioned: [ b ] Yes

    Open or Closed: [ c ] Closed

    a Resolved –
    (1) Management concurs with the recommendation, and the planned corrective action is consistent with the recommendation.
    (2) Management does not concur with the recommendation, but planned alternative action is acceptable to the OIG.
    (3) Management agrees to the OIG monetary benefits, or a different amount, or no ($0) amount. Monetary benefits are considered resolved as long as management provides an amount.

    b Dispositioned – The agreed-upon corrective action must be implemented, determined to be effective, and the actual amounts of monetary benefits achieved through implementation identified. The OIG is responsible for determining whether the documentation provided by management is adequate to disposition the recommendation.

    c Once the OIG dispositions the recommendation, it can then be closed.
