Monday, June 23, 2014

How to Conduct a Quality Assessment Under a Short Timeframe With Limited Resources

As part of its service offering, Prolifics provides clients (or prospective clients) an assessment of their current quality processes. Recently, Prolifics engineers were asked to go onsite at a client location to perform such an assessment of a complex, multi-tier application. However, the client would only agree to a very short timeframe for the assessment. Prolifics accepted this challenge, knowing that it would be a demanding project. This brief article describes, at a high level, the approach taken, the constraints we worked under and the end result.

Rapid Assessment Approach
For such an ambitious project, every hour counts. Thus, it is indispensable to initiate the project with remote outreach to the client, even prior to going onsite, and to gather as much essential information as possible. This advance work can take the form of reviewing any existing documentation about the client and the systems under test. Once the early information is reviewed, it can in turn prompt you to request additional items, again in advance of going onsite or during the first day or two. For the assessment at this client, we reviewed relevant correspondence and other documentation between the client and the quality practice. There was recent documentation from an initial round of discussions which included both the Head of the Testing Practice and the Director. We also held pre-assessment conference calls with the Sales team, who provided helpful insights into the client organization.

We then asked the client to provide an initial set of test plan documents and test cases. We also requested high-level information on the system architecture. Every day of “front loading” works to the engineer’s advantage. Thus, early in the project we obtained a basic sense of how the client’s QA team conducted testing work within their test cycle.

Also, to address items which could constrain us, we made requests for client network access and test case and defect tracking repositories.  Both these items (network access and test case/defect tracking system access) are typically “long lead time” items which can further constrain the assessment schedule. Therefore it is important to request these in advance of going onsite.

The day before our client visit, we developed a preliminary project plan, together with milestones and dates that we wanted to achieve for the assessment.

Selecting a Methodology
Various options are available for assessment methods. It is important to choose a methodology which provides as much quantitative and qualitative measurement as possible, in the time allotted. So, the assessment method must do the following:

  • The methodology must be comprehensive enough to cover testing processes both in breadth and in depth.
  • It must be flexible in order to adapt to the constraints of limited time and resources.  
  • It needs to be sufficiently objective such that it has credibility with the client.

With the Practice Head’s guidance, we utilized a Test Process Improvement (TPI) methodology to assess this client. The Prolifics version of TPI measures up to twenty (20) dimensions of testing effectiveness. In the brief two weeks that we had for this assessment, we had to select which of those twenty dimensions to assess. It came down to a combination of the dimensions most important to the testing process and those most readily measurable.

Conducting the Assessment
Knowing that we had just two weeks for the assessment, we placed much energy into “front loading” our activities during the first week. This involved meeting the key managers and leaders of the client.  We scheduled a round of one-on-one interviews with those individuals.  It is important to plan these interviews in advance of the actual meetings; plan the interview questions and carefully document the answers. Our interviewees were:  the Business Owner, QA Manager, QA Team Leads, Development Manager and Leads, Project Manager, Business Analysts and test automation staff.

In order to fully carry out our assessment, we made follow-up requests for information we had not yet received, and also requested system requirements, specifications and documentation. Several of these items were identified as useful during those interviews and meetings.

Data Gathering and Analysis
Reviewing test artifacts (Test Strategies, Test Plans, Test Cases) is a natural place to start. However, system requirements provide the initial “blueprint” for understanding the application under test. While clients may at first be reluctant to disclose requirements or design documents, it is essential to review a sufficient sample of requirements in order to comprehend what the application is intended to do. Then, proceed to interpret the Test Plans, cases, etc. in light of the requirements.

From there, defect reports provide the essential, interlocking pieces of information. Though much can be gleaned from reviewing defect reports, a more complete interpretation is possible when requirements, test strategies, plans and test cases are first considered. Then, defect reports can be assessed in full context.

Having requirements and artifacts, we began preliminary analysis and compilation of the data.  Day after day, we added incremental data, performed further analysis and began to identify deficiencies, patterns and gaps in the quality processes.  For areas of significant concern, we continued to gather even more data to strengthen and validate our analysis.

Frequent Communication
We decided from the outset to mandate frequent communication among the assessment team. This included the engineers onsite, our project manager, the practice director and the sales team. At designated intervals, and on an as-needed basis, we also included the client executive.
Our communication included daily conference calls between the above members, followed by daily meeting minutes emailed to the team, plus certain Prolifics executives. There were other communications (follow-up meetings, email, etc.) as needed to keep the assessment moving forward and deal with impediments. This created transparency, delineating our progress and identifying potential problems. Trust among team members grew and helped this ambitious schedule proceed with less stress.

Interim Progress Briefing
At the end of week one, we delivered an abbreviated status presentation to the client. This helped us confirm that we were on the correct path for the assessment and clarified which areas needed deeper investigation and which required little additional data gathering. It also provided the opportunity to deliver preliminary findings that our analysis was surfacing.
Although our presentation was intentionally limited in content, it did serve to firm up both our (Prolifics) and the client’s sense that the assessment was proceeding in the correct direction and making meaningful progress.

It is important that this particular briefing be a formal, scheduled milestone event. This gives the assessment team a midpoint goal to achieve. The client also has the opportunity to receive preliminary results, ask questions and offer suggestions and guidance. This interim briefing surfaced no surprises.  We confirmed that we were on the right track and that progress was being made. Our client indicated they were looking forward to the final report. We took this as a good sign.

The Second Week
After the interim briefing, we exerted ourselves to gather additional information. We performed second-round interviews with client staff where needed. Then we went deeper into client test artifacts and the TestWare systems. Reviewing the way quality information is structured and organized, and comparing that to release requirements, is a very useful aspect of an assessment. Patterns became clearer, as did any gaps or deficiencies.

While keeping good notes and other documentation is needed throughout the assessment, it is vital to begin outlining the assessment report at the earliest possible time. Ideally, you would start the report outline as the data begin to form into facts – and then as facts coalesce into findings.

Then, we circulated drafts of the assessment as early as practical within the quality practice. As is human nature, much of the feedback on the draft assessment report arrived late – bumping right up against the deadline. Fortunately, we had already incorporated many of the earlier comments and suggestions, so we were able to accommodate significant last-day revisions.

Delivering the Assessment
The act of delivering an assessment report to the client does not simply occur at the conclusion of the project.  It is a cumulative effort that actually begins days before.  An assessment is like a “report card” to the client.  More often than not, it will convey information that is critical of their processes.  Providing negative information must be done with sensitivity.  Working under ambitious deadlines can be stressful to all involved.  Adding negative findings into the mix can lead to unpredictable results.  Therefore, these dynamics must be anticipated and managed.

As described earlier, we built frequent, open communication into the project. We approached the assessment in a balanced and fair way with all of those involved. Client stakeholders were hearing about our day-to-day activities from their staff, so it is essential that this feedback be positive. Objectivity is also important. Thus, we presented positive findings along with critical, or negative, findings. All of this fosters trust.

The engagement culminated with an onscreen presentation for the client. As the assessment report was lengthy, we delivered the presentation as an executive briefing, with a summary up front. The client requested a preview of the following detailed section, which we were happy to provide. The client then asked questions, and our answers were crisp and to the point. This laid the groundwork for the client to read the entire presentation.

Our client recognized the hard work that went into a very short two-week engagement. It was evident that our approach was objective and that the results had integrity.

To learn more about Prolifics' testing practice, visit our website.

Dex Malone is a Delivery Manager at Prolifics specializing in Independent Validation and Verification (IV&V). With over twenty-five years’ experience, Dex specializes in large, complex IT systems.  He has worked in Quality Engineering leadership roles in software development across various industries.  These include regulated environments in healthcare, telecommunications, general business, banking and finance, across both public and private sectors.  His interests are in software security and privacy.  When not in front of a computer, he can be found with family, in the mountains or at the shore.  


Thursday, June 12, 2014

Prolifics Employee Showcase: Alan Shemelya, VP of Healthcare Business Development

Alan Shemelya is the Vice President of Healthcare Business Development and is responsible for growing Prolifics’ healthcare services and solutions for customers around the world. With more than 30 years of experience and thought leadership in the Healthcare industry, Alan has accomplished an impressive list of milestones over the course of his career and is passionate about not only helping businesses achieve success but also making significant strides in patient care.

Commitment to Healthcare
Alan’s passion for the healthcare industry began as a young child after a trip to the hospital. His interest grew in college when he wrote a thesis on healthcare finance. During this project, he realized that hospitals were manually driven in just about everything they did on a daily basis and were operating 10 years behind. From registration to billing, automated processes simply did not exist.

It was at this time that Alan recognized an opportunity in the industry. Throughout his career, his work has always been driven by a consistent mission: finding ways to improve the care of sick children.

The Road to Prolifics
Alan joined the Prolifics team in March 2014. Over his career, he has worked with a number of leading healthcare information technology (HIT) and consulting companies in Sales, Product Development, Marketing and Implementation of EMR, Revenue Cycle and ERP solutions. Prior to joining Prolifics, Alan provided his leadership and expertise at HCA/Parallon, Xerox/ACS, Allscripts and McKesson.

Alan is driven by his motivation to provide innovative IT solutions for the provider and payer markets, specializing in improving patient, provider and payer communications, business process enablement, and outcomes. With Alan’s expertise in the healthcare industry, Prolifics has the experience, skills and knowledge necessary to deliver in this highly complex industry, aligning innovative and proven solutions with clients’ business goals.

Innovation & Thought Leadership
Alan has written articles on the redesign of business processes and advancement in technologies that enable the automation of workflow decisions impacting both providers and payers. He has been a featured speaker with the Southern Healthcare Administrative Regional Process (SHARP) and has been on the National Speakers List for the Healthcare Financial Management Association (HFMA). Below is a list of articles that Alan has published throughout his career.

HFMA ANI, 2009
HFMA Georgia Meeting, 2010
HFMA Patient Friendly Billing – Accessibility of Data

The Future is Bright
When asked, Alan selected the following three words to describe Prolifics: agility, intelligence, responsibility. He sees these characteristics as key drivers for success in building innovative healthcare IT solutions.

A Recent Journey
Alan recently participated in the AIDS/LifeCycle 2014 bike ride. This is a fully supported, 7-day bike ride from San Francisco to Los Angeles designed to raise money and awareness in the fight against HIV/AIDS. This event delivers a life-changing experience for thousands of participants from all backgrounds and fitness levels united by a common desire to do something heroic. Prolifics proudly supports Alan and this worthwhile cause!



Alan Shemelya is the Vice President of Healthcare Business Development at Prolifics. He is responsible for growing Prolifics' healthcare services and solutions for customers around the world. Alan has 30 years of experience and thought leadership in the Healthcare industry, specializing in Sales, Product Development, Marketing and Implementation of EMR, Revenue Cycle and ERP solutions. As an industry expert in the healthcare financial arena, Alan has written articles on the redesign of business processes and advancement in technologies that enable the automation of workflow decisions impacting both providers and payers. He has been a featured speaker with the Southern Healthcare Administrative Regional Process (SHARP) and has been on the National Speakers List for the Healthcare Financial Management Association (HFMA).


Wednesday, June 11, 2014

The Web Speech API and WebSphere Portal 8

The evolution of speech recognition software has come a long way. Companies like AT&T Bell Labs, IBM, and Nuance Communications are leading speech recognition experts. Commercialized software like IBM ViaVoice and Dragon NaturallySpeaking were game changers in the speech recognition software industry. The backbone technologies in speech software (e.g., Hidden Markov Models, noise filtering, acoustic signal processing) were originally developed decades ago, but they are finding application today in products and services like gaming consoles, smartphones, customer help centers, and infotainment systems. The adoption of speech recognition technologies is becoming an important part of the way we live. The application of the technology is crossing multiple industries, from enforcing traffic laws to educational training and medical transcription. The pervasiveness of speech-enabled products and services will likely lead to further innovation and refinement. This article will show you a rudimentary way to bring speech recognition to your web experience using a few lines of JavaScript, CSS, and HTML code in the WebSphere Portal 8 environment.

By leveraging the JavaScript API defined in the Web Speech API specification, we’re able to tap into the browser’s audio stream to transcribe speech to text. Currently, the only browser that supports the Web Speech API specification is Google Chrome. As noted in the cited W3C document, the Web Speech API specification is not a World Wide Web Consortium (W3C) standard, nor is it on track to become one.

To enable speech recognition in the browser, there are a handful of events and attributes we need to handle and define. The start, end, result, and error events provide the triggering points needed to initiate and terminate the speech recognition feature on the web page. There are also a few attributes we need to set to assist in the transcription process.

Explanation of Core JavaScript Functions and Attributes
For this implementation, the anonymous functions corresponding to the start, end, result, and error events in the Web Speech API specification are shown below. The start and end events are triggered through button clicks; the end event can also be triggered by non-detected speech or by an error detected by the speech recognition service (in our case, the service hosted by Google).
In the example code below, we adapted the W3C Web Speech API specification sample code as our foundation:
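The sketch below stays close to the W3C sample and uses the element IDs that appear in the snippet later in this article (sr_microphone_button, final_span, interim_span, wpthemeSearchBoxInput); the image-swapping details are left as comments rather than spelled out.

// Minimal sketch of the speech recognition handlers, adapted from the W3C sample.
var recognizing = false;
var finalTranscript = '';
var recognition = new webkitSpeechRecognition();

recognition.continuous = true;      // keep listening until explicitly stopped
recognition.interimResults = true;  // return interim (partial) transcripts
recognition.lang = 'en-US';         // language of the recognition

recognition.onstart = function() {
  recognizing = true;
  // Swap the microphone image to its animated "on" state here.
};

recognition.onerror = function(event) {
  // event.error values include 'no-speech', 'audio-capture' and 'not-allowed'.
  console.log('Speech recognition error: ' + event.error);
};

recognition.onend = function() {
  recognizing = false;
  // Restore the microphone image to its "off" state here.
};

recognition.onresult = function(event) {
  var interimTranscript = '';
  for (var i = event.resultIndex; i < event.results.length; ++i) {
    if (event.results[i].isFinal) {
      finalTranscript += event.results[i][0].transcript;
    } else {
      interimTranscript += event.results[i][0].transcript;
    }
  }
  document.getElementById('final_span').innerHTML = finalTranscript;
  document.getElementById('interim_span').innerHTML = interimTranscript;
  // Copy the final transcript into the portal search box.
  document.getElementById('wpthemeSearchBoxInput').value = finalTranscript;
};

// Toggle recognition from a microphone button's onclick handler.
function toggleStartStop() {
  if (recognizing) {
    recognition.stop();
  } else {
    finalTranscript = '';
    recognition.start();
  }
}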






The SpeechRecognition (e.g. webkitSpeechRecognition) JavaScript object attributes that are utilized in this example are defined below:
  • continuous: Sets the behavior for discrete or continuous speech
  • interimResults: Indicates if an interim transcript should be returned 
  • lang: Defines the language of the recognition

Please reference the Web Speech API specification for a complete list of method, attribute, and event definitions.

Enabling Speech Recognition in WebSphere Portal 8
Copy the images, CSS, and JavaScript files from the “SpeechAPI.zip” file to its corresponding Portal theme static resource folders (e.g. <Theme>/images, <Theme>/css, <Theme>/js).


For a quick test, edit the “search.jsp” file from “<WPS_HOME>\PortalServer\theme\wp.theme.modules\webapp\installedApps\ThemeModules.ear\ThemeModules.war\themes\html\dynamicSpots\modules\search\”:
Include the following JSTL variable declarations at the top of the “search.jsp” file:
<!-- START: Speech Recognition JSTL variables -->
<c:set var="sr_basePath" value="/wps/mycontenthandler/dav/fs-type1/themes/portal8WebSpeechTheme"/>
<c:set var="sr_imgPath" value="${sr_basePath}/images"/>
<c:set var="sr_cssPath" value="${sr_basePath}/css"/>
<c:set var="sr_jsPath" value="${sr_basePath}/js"/>
<!-- END: Speech Recognition JSTL variables -->  

Include the following HTML input element after the input element (id="wpthemeSearchBoxInput") in the “search.jsp” file:
<input class="wpthemeSearchText" id="sr_microphone_button" type="button" title="Click to start speaking" alt="Microphone Off" style="width: 22px; height: 22px; vertical-align: middle; background-image: url('${sr_imgPath}/microphoneOff_22pxs.png');" onclick="WebSpeechHelper.prototype.toggleStartStop(event, stateInfo)">

Include the following HTML snippet right after the last <div> in the “search.jsp” file
<!-- START: Speech Recognition HTML -->
<div id="sr_webSpeechAPIContainer">
<!-- Pull in Speech Recognition Resources -->
<LINK rel="stylesheet" type="text/css" href="${sr_cssPath}/speechRecognition.css">
<SCRIPT src="${sr_jsPath}/speechRecognition.js"></SCRIPT>
<div id="sr_results" class="sr_results">
<span id="final_span"></span> 
<span id="interim_span"></span>
</div>
<SCRIPT>
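// Build the configuration consumed by the speech recognition helper: locale,
// image paths, the microphone button ID, the transcript span IDs, and the ID
// of the portal search box the feature ties into.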
var jsonText = '{"parameters":[' +
'{'+
'"locale":"en-US",'+
'"imagePath":"${sr_imgPath}",'+
'"microphoneOnImage":"microphoneOnAnimated_22pxs.gif",'+
'"microphoneOffImage":"microphoneOff_22pxs.png",'+
'"microphoneDisabledImage":"microphoneDisabled_22pxs.png",'+
'"microPhoneButtonID":"sr_microphone_button",'+
'"finalSpanID":"final_span",'+
'"interimSpanID":"interim_span",'+
'"searchBoxID":"wpthemeSearchBoxInput"'+
'}'+
']}';
var jsonObj = JSON.parse(jsonText);
var stateInfo = new SpeechStateInformation(jsonObj);
WebSpeechHelper.prototype.initializeVoiceRecognition(stateInfo);
</SCRIPT>
</div>
<!-- END: Speech Recognition HTML -->


*** Please reference the “search.jsp” file, included in this blog, to verify that the code-placement is correct. ***

After making the modification to “search.jsp”, log into WebSphere Portal with Google Chrome (version 25 or above) to see the microphone image appear to the left of the search icon, as shown below:



By clicking on the microphone image, the browser will ask you to “allow” microphone usage. The microphone image will animate when it’s enabled. In this example, I searched for “web content manager”. (Note: Click on the native search button after you’re finished speaking)




Summary
We hope this article has helped introduce the possibilities of adding speech capabilities to the user experience in WebSphere Portal 8 and beyond. The referenced example and resources should help in getting started with this emerging trend of voice-enabled input in the digital experience. Many possibilities exist for reacting to and enabling voice input and interactions with the IBM Digital Experience platform, and we look forward to increasing browser support and adoption in the near future. The solution presented here could also be abstracted into a portal theme module for reuse with the modular theme framework. For the purpose of illustrating the voice API integration, we have not packaged the examples as Portal 8 modular theme contributions as we normally would. A production-ready solution would contain the modular theme contribution configuration so that resource aggregation and minification are addressed and conform to Portal 8 best practices.



Richard Yu is a Senior Consultant at Prolifics with over 11 years of experience in application design, development and migration. He has worked on projects of varying complexity and magnitude in healthcare, manufacturing, and government. He holds two Master's degrees from NYU Polytechnic School of Engineering and a Bachelor's from Stony Brook University.

Data Integration Platform Accelerator - Talend ETL Tool

Business Challenge

Data integration and collaboration between enterprise applications is a key factor for any organization. The most important aspects are seamless integration of data, proper migration of different data sources and effective collaboration across multiple enterprise applications. The core problem is to address the integration needs and challenges of enterprise applications that are built on a PostgreSQL database.

The following are the key problems that need to be addressed:
  1. Effective integration between multiple Enterprise Applications
  2. Collaborative seamless data integration between various Enterprise Applications
  3. Provide common source for various BI reports
  4. Proper data migration of different kinds of data sources
  5. Replication across multiple systems

Solution Approach

To address these problems, the proposed solution approaches are:
  1. Trigger-Based Approach - Talend CDC
  2. Using xmin - PostgreSQL CDC
  3. Trigger & xmin - Slony-I Replication
  4. Using WAL & Trigger - Streaming Replication: PostgreSQL 9.0 onwards

1. Trigger-Based Approach - Talend CDC:

The trigger-based approach using Talend CDC:
  • Creates change tables for each source table which needs to be watched
  • Triggers store the primary key of the changed record
  • Triggers store metadata of the transaction in the change tables
    • PK (Primary key of the transaction)
    • Change Type (Update / Insert / Delete)
    • Changed By
    • Changed Time
  • Talend CDC component extracts the changes from the change tables
    • Follows a publish / subscribe based model
    • Provides a view to extract from the source table based on the PK
    • Left-join with the source table
    • Provides the latest updates
    • Maintains its own control tables

2. Using xmin - PostgreSQL CDC:

The approach using xmin - PostgreSQL CDC:

  • Maintains configuration on which tables need to be watched
  • Extracts the xmin for the tables to watch and stores in control tables
  • Creates a generic view to extract the changes that happened after the last extract based on xmin

Advantages:

  • Non-Intrusive approach over the source schema
Disadvantages:
  • Indexes are not available over xmin
  • Wrap-around problem
  • Cannot track deletes
Metrics:
  • In a simulated environment, extraction from a table with more than 500K rows on a 4GB RAM, 2-core VM takes around 4-5 seconds
    • This would grow as the number of rows increases

3. Trigger & xmin - Slony-I Replication:

The approach using Trigger & xmin - Slony-I Replication:

  • Master / Slave replication
  • Based on triggers on Master or Origin node
  • Publish / Subscribe model
    • Stores transactions in sl_log_1 / sl_log_2 tables
      • Stores TableId
      • Stores TransactionId
      • Stores Audit info
  • Creates Sync Events for subscribers
  • Stores sync events in sl_event table
    • Slony daemons on subscribers pull the events
    • Get the transactionid for that event from the log tables
    • Extract and load the data in the slave schema
  • Have Talend CDC point to the slave and extract the changes based on the trigger based approach

Advantages:

  • Although xmin is used internally to detect and replicate the changes, the wraparound issues are handled by Slony
  • A replication set allows replicating only certain tables, not the whole schema
Disadvantages:
  • Pulling changes/modified data over wire may cause delay
  • Changes to the source schema
Metrics:
  • According to existing stats, with SLON_DATA_FETCH_SIZE set between 100 and 400, Slony replication has good response times
  • As the triggers would be on the slave, Talend CDC extraction would be faster, since it is based on primary keys



4. Using WAL & Trigger - Streaming Replication: PostgreSQL 9.0 onwards

Limitations with this approach:

  • WAL-based replication requires that all databases use identical versions, running on identical architectures
  • WAL-based replication duplicates the model
  • No ability to apply specific updates on the target schema
  • Synchronous Replication based on 2-Phase commits


Proposed Solution (Talend Tool)

Of the approaches above, the trigger-based approach using Talend CDC appears to be the best option. Its main advantage is efficient retrieval from the source based on indexed columns.

Overview
Talend is an Open Source Integration Software Company that provides open source middleware solutions that enable organizations to gain more value from their applications, systems and databases. Shattering the traditional proprietary model, Talend democratizes the integration market by providing enterprise-grade open source technologies that cover both the data integration and application integration needs of organizations of all sizes.

Talend's unified integration platform addresses projects such as data integration, ETL, data quality, master data management and application integration. With their proven performance, user-friendliness, extensibility and robustness, Talend's solutions are the most widely used and deployed integration solutions around the world. Talend suits developers who are already writing Java programs and want to save a substantial amount of time with a tool that generates code for them. A simple DW architecture is shown below.

Talend Features:

  1. Collaborative Data Integration - Talend’s data integration products provide powerful and flexible integration, so that firms can stop worrying about how databases and applications are talking to each other, giving them the ability to maximize the value of their data.
  2. Transform and Integrate Data between Systems - Talend’s data integration products provide an extensible, highly performant, open source set of tools to access, transform and integrate data from any business system in real time or batch to meet both operational and analytical data integration needs. With over 450 connectors, it has the ability to integrate almost any data source. The broad range of use cases addressed includes massive-scale integration (big data/NoSQL), ETL for business intelligence and data warehousing, data synchronization, data migration, data sharing, and data services.
  3. A Comprehensive Solution - Talend provides a Business Modeler, a visual tool for designing business logic for an application; a Job Designer, a visual tool for functional diagramming, delineating data development and flow sequencing using components and connectors; and a Metadata Manager, for storing and managing all project metadata, including contextual data such as database connection details and file paths.
  4. Broad Connectivity to All Systems - Talend connects natively to databases, packaged applications (ERP, CRM, etc.), SaaS and Cloud applications, mainframes, files, Web services, data warehouses, data marts, and OLAP applications. It offers built-in advanced components for ETL including string manipulators, Slowly Changing Dimensions, automatic lookup handling and bulk loading. Direct integration is provided with data quality, data matching, MDM and related functions. Talend connects to popular cloud apps including Salesforce.com and SugarCRM.
  5. Teamwork and Collaboration - The shared repository consolidates all project information and enterprise metadata in a centralized repository shared by all stakeholders: business users, job developers, and IT operations staff. Developers can easily version jobs with the ability to roll-back to a prior version.
  6. Advanced Management and Monitoring - Talend includes powerful testing, debugging, management and tuning features with real-time tracking of data execution statistics and an advanced trace mode. The product incorporates tools for managing the simplest jobs to the most complex ones, from single jobs to thousands of jobs. Processes can be deployed across enterprise and grid systems as data services using the export tool.
  7. It uses a code-generating approach, with a GUI built on Eclipse RCP.
  8. It generates Java or Perl code which can run on any server.
  9. It can schedule tasks (including with schedulers like cron).
  10. It has data quality features, ranging from its own GUI to more customized SQL queries and Java.



Figure 2: Typical enterprise data integration model

Advantages of Using Talend Tool

  • Talend is used in an application to retrieve and transform the data across multiple systems at enterprise level. 
  • With Talend, centralized data integration, enrichment and distribution are made easy
  • Capable of generating multiple BI Reports using unified transactional data model
  • Ability to configure the Talend Jobs to pick data from a certain period of time.
  • All such data flows are registered with an approval process for new ones, with special emphasis on extra-company flows
  • Metadata-driven architecture to ensure that critical data elements and transformations are documented, controlled (e.g., sensitive data) and kept in sync with implementation
  • Slony replication is used alongside Talend to replicate data across multiple enterprise systems
  • Talend CDC is used to create Triggers and views on the source database to pull the changed data and insert / update the Target Database.

Value Addition using Talend Open Source Tool

  • Talend Open Source Tool enabled development for centralized location of data for various enterprise applications
  • With Talend Open Source we could achieve easy transactional data transformations across systems
  • Simplified tool to develop transactional data jobs
  • Insulates clients from changes in transactional systems
  • Reduces modelling and data quality effort downstream, e.g., at DW/BI for conforming dimensions, de-duping and resolving inconsistent reports
  • Provides good turn around on data retrievals
  • Provides source data for operational reporting
  • Talend CDC enabled Rapid Application development 
  • Slony enabled smooth replication of data

Enterprise Entitlement Engine and Framework

Overview
A key goal of business technology systems is to ensure that the right people have access to the right information at the right time. An entitlements engine is a fine-grained authorization engine that externalizes, unifies, and simplifies the management of complex entitlement policies, strengthening security and compliance, improving IT efficiency, and enhancing business agility. These authorizations may be used to protect the most fine-grained business or IT concept. Many organizations view this as a high-priority need, to be managed by centralized applications/tools for proper authentication and authorization.

Essentially, systems accomplish these requirements by enforcing a set of policies that regulate the behavior of system components and resources to match the “access” profile of the user accessing the system. At the most abstracted level, the system forces a user to specify and verify who they are (authentication) and then limits resources that can be accessed or manipulated by the user (authorization). Policies and rules govern each of the two facets. These two components of access management impose different types of challenges and requirements.

Authentication – establishing and validating identity. In most cases user ids and passwords presented at login screens/forms suffice.
Authorization – what information is a user permitted to access and manipulate – can impose very complex requirements.

The Entitlement Engine will be a critical enterprise component that addresses fine-grained, context-sensitive authorization requirements. Authorization needs are not hard-coded into applications, but rather specified as “configuration” in a UI provided by the Entitlement Framework. It is intended to be “application context aware”, thus providing a means to express very fine-grained authorization requirements to the system. It integrates as a service layer with the application, providing loose coupling. In addition, it can be integrated within an application’s presentation and validation framework to eliminate screen-at-a-time integration effort – making the execution seamless to developers.

Concepts
Authentication: Before a user or another system can access any resource managed by the system, the requesting entity must establish and “authenticate” its identity. At a high (simple) level, this process is implemented using one of several authenticating forms (certificates, logins, biometrics, etc.), depending on the context of the request and the requirements of the established policy. Most commonly, a user provides credentials (user ID, password) at login. If the provided credentials meet the security requirements, the system can proceed with the identity of the validated user. The authentication process is supported by a multitude of rules and policies (e.g., password rules, expiration policies, failed attempts, etc.) that guard against users (and other systems) trying to gain unauthorized access.

In the current Entitlement Engine, authentication is not addressed, as it is handled by a separate third-party tool.

Authorization:  In an organization or business unit there are people in different roles who are required to perform specific tasks – but not authorized to perform other tasks. Enterprise applications and resources facilitate these individuals in performing their job functions efficiently and effectively. As these applications and resources will be accessed by people with different levels of authorization, applications require the capability to provide the necessary restrictions based on the role the user has.

Role Based Access Control (RBAC):   
The first part of the authorization approach relates to restricting system access to authorized users based on the role they have. Thus, authorization is expressed as permission sets based on roles.
RBAC is considered “Coarse-grained” authorization and is used to define broader-level functionality (features or resources) a role can access. Users are assigned one or more roles. When that user logs in to a particular application, the application can determine what resources (menu items, screens, etc) that user can access, based exclusively on their role.

Limitations:
At this level, we cannot define the “context” in which the feature is being accessed and, therefore, not specify a specific permission set for a “context”. For example, the authorization policy – that a Credit Reviewer cannot approve a Commitment – can be expressed and executed just based on a user’s role.
We also do not have a way of creating a profile based on the user’s skill set and assigning permissions that are based on attributes other than role.

Attribute Based Access Control:  
The second part of the authorization approach augments RBAC capabilities to allow policies based on attributes of a user (e.g., skill, age, etc) and/or the environment (e.g., time, network, etc). While this notion can be extended to additional attributes (like application or business object), there is no structured (or simple) mechanism in off-the-shelf products to facilitate accessing these attributes or to define policies applicable to these attributes. Further, these tools do not support out-of-the-box facilities to help integrate with applications to execute policies in a runtime environment. Managing fine-grained control in a flexible manner while lowering the cost of delivery and maintenance requires the framework to be “context aware”. It must support exposing application and object attributes (e.g., screens, forms, button in an application or loan amount, LTV, etc.) at definition time so that policies for these (and other) attributes can be defined for profiles and roles. In addition, the framework needs to support convenient integration with the application framework.  Moreover, the architecture of the framework must allow efficient execution of privileges at runtime so that the system will scale with high volumes.

3. Objectives
The objectives of the proposed Entitlement Engine are:

  1. Fine grained authorization which gives the flexibility in defining the permissions based on a context for a specific role or profile.
  2. Policies can be defined with relevance to context. 
  3. To provide centralized way of defining and evaluating policies based on the application, object, roles, profiles and resources. Changes are achieved through “configuration”.
  4. To integrate with the application framework and perform efficiently, since policies are defined as values, ranges and lists for attributes and not in a computer language.




4. Solution Approach
The approach proposed here is to develop and deploy an Enterprise Entitlement Engine and Framework that specifically meets the sophisticated needs of the organization. Below is the level of access or security control that the Entitlement Engine is proposed to have.


Traditional IAM tools (Oracle Enterprise Manager, IBM TIM and TAM, WSO2 Identity Server) are good at providing a centralized way of access management at the enterprise resource level (show a form or not show a form) but not at managing “sub-resource” level details and privileges. They lack a centralized way of access management at the object, record and field level, particularly in the context of an application that manipulates these objects.

The proposed Entitlement Engine provides some critical features to meet the necessary requirements.
  1. A business user interface to define profiles and associate access to the lowest level (field or data in a field) per application. 
  2. Entitlement engine provides the permissions for a specific role or profile based on the object hierarchy for the application once the user logs in. 
  3. As permissions need to be evaluated for every user who logs in, the engine is performance sensitive and needs to bring back all the permissions and access on the whole object hierarchy. Since we are not going to evaluate the permissions on each and every field, the approach is to cache the context-specific retrieved entitlements so that there is effectively zero latency in the requests made by the application.
  4. The Entitlement Engine also provides a way for a role to delegate some of its permissions to the roles under it. This gives a manager the flexibility to maintain business continuity even in his absence. All such activities are audited and tracked for future reference.
  5. The Entitlement Engine provides services that applications built in any language can call to obtain permissions. The engine provides RESTful web services, which can be integrated for better performance.
  6. The Entitlement Engine provides various reports on the profiles created, usage of applications, delegations made by profiles, changes in object permissions, etc. This helps the management as well as the compliance team make sure that access to the applications, and to features within the applications, is based on the defined standards.
  7. The Entitlement Engine is built upon a framework which can be extended as needed and can integrate with any existing systems as required.
  8. The Entitlement Engine has built-in caching mechanisms that bring down the response times for requests from applications. The architecture is designed for high availability and failover through proper load balancing at various layers.
5. Entitlement Engine Features
Entitlements System has the following features:

System Management
In our entitlements system we treat every enterprise resource as a system. We can define different kinds of systems like Web Application, Web Service, Database, FTP server, any network device etc., by using our pre-configured metadata about different system types.
Below are the high level features 
  • Defining  System
  • Associating attributes defined in the System type to System
  • Associating Object Hierarchy to a system
  • Defining  allowed access types to the objects in the object hierarchy
Object Hierarchy Management
This feature allows defining and associating an object hierarchy to a system. Every system has a specific object hierarchy. We have pre-configured object types which can be associated to these objects or we can extend the metadata as needed. At the object level we can define what different kind of access can be allowed on this object.

Configuration Management
Every feature (System, Profile, Access and Object) in the entitlement system is based on a type. The configuration management helps in defining the metadata for the features provided by the entitlement system. 

Below are some of the configurations
  • System Type
  • Object Type
  • Access Type
  • Identity Provider
Profile Management
A profile defines what level of access a user has to a particular system. Profiles are created for specific systems and then associated with user(s) or roles.
High level features 
  • Profile Creation - A particular profile can be created for the System Object hierarchy. 
  • Profile  Delegation - This feature allows a manager to delegate some or all of his objects from a profile to a sub-ordinate. The manager can also specify the duration and can change the level of access for that delegated profile.
  • Profile Configuration - Profile can be configured against associated system object hierarchy. Extended data constraints can be configured for objects in object hierarchy.
Integration with Third-Party Identity Providers
The entitlements system is built to integrate with any third-party identity provider. The system also has the capability to define identities internally or to map/sync identities from different third-party vendors. The internal structure is mapped based on users:
  • Users - A regular sync job updates the users in our entitlements system from the third-party provider.
User Profile Management
Entitlement Engine facilitates mapping profiles to users.

Entitlement Services
These are the different services provided by our entitlements engine to the enterprise applications (for getting the entitlements for the logged-in user). We have pre-defined services which can be accessed over REST, as sketched below.
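The service contract is not spelled out in this article, so the snippet below is only a hypothetical sketch of how a browser-based application might call such a REST service; the URL, query parameters and response shape are assumptions, not the engine’s actual API.

// Hypothetical sketch only: the endpoint, parameters and response shape are
// assumed for illustration and are not taken from the actual Entitlement Engine API.
function fetchEntitlements(systemId, userId, callback) {
  var url = '/entitlements/api/permissions?system=' + encodeURIComponent(systemId) +
            '&user=' + encodeURIComponent(userId);
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.setRequestHeader('Accept', 'application/json');
  request.onload = function() {
    if (request.status === 200) {
      // Assumed response: a JSON map of object identifiers to access types,
      // e.g. {"loanForm.approveButton": "HIDDEN", "loanForm.amountField": "READ"}
      callback(null, JSON.parse(request.responseText));
    } else {
      callback(new Error('Entitlement service returned ' + request.status));
    }
  };
  request.onerror = function() {
    callback(new Error('Entitlement service unreachable'));
  };
  request.send();
}

// Example usage: hide or disable UI elements based on the returned permissions.
fetchEntitlements('loanOriginationApp', 'jdoe', function(err, permissions) {
  if (err) { console.log(err.message); return; }
  console.log(permissions);
});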

Audit Control
Every action done on the entitlements system is tracked. We have a view and edit history on every feature (System, Profile, Object Hierarchy etc.) of the system. 

Reports
There are several reports provided by our entitlement system which give high-level as well as detailed information to the business users.

Below are some of the canned reports
  • Users based on selected  profiles
  • Delegated profiles
  • Profiles based on system
  • Profiles mapped to users, organization units and cost center
  • System level profile usages by user
Back-End Jobs
There are back-end scheduled jobs to sync up identity information from the configured identity provider.

Note: The items below are handled by third-party tools.
  1. Identity Management: The third-party tool takes care of creating, updating and managing user identities.
  2. Authentication: The third-party tool takes care of authenticating the user against the identity provider and provides federated access or an SSO implementation for the enterprise applications.
6. Technologies 

  • Operating System: Windows XP / Windows 7 (Development), RedHat Enterprise Linux 5 (Production)
  • IDE/Language: Eclipse / Java
  • Secondary Cache/Persistence Frameworks: Memcache
  • Frameworks: PrimeFaces 3.5, Spring 3.0
  • UI/Ajax Frameworks/HTML/CSS: JQuery
  • Database: PostgreSQL
  • Browser Compatibility Test: IE 6.0+, FireFox, Chrome, Safari
  • Web Server/Application Server: JBoss 7.1.1

7. Advantages of Building on Open Standards

Our solution is built on open standards so that it’s easy to integrate with any IAM tool and heterogeneous applications based on different environments. Below are some of the details.
  1. We have built-in adapters that can be used by enterprise applications and that understand SAML-based authentication tokens.
  2. We have built-in connectors which help the applications to communicate over SOAP, REST or Thrift. This gives the applications the flexibility to choose the protocol they need.
  3. We have built-in service providers for giving out the defined application-level permissions in XML, JSON or compact mode.
  4. We have built-in adapters that, when configured, can talk to any XACML-based PDP. The adapter understands the XACML response and can return it to the application in the form it needs (SOAP/JSON/Compact, etc.).
  5. Our solution is built on open source frameworks (Spring, PrimeFaces) and deployed on JBoss as the application server.

To learn more about Prolifics, visit www.prolifics.com.

Thursday, June 5, 2014

Measuring ROI from Business Process Management Initiatives

Executive Summary:
Business process management projects are riddled with implementation challenges. Differing expectations on process performance, a lack of communication between IT and business users, delays in “operationalizing” BPM applications, and process designs that change the "we have always done it this way" style of operations are some of the main contributors to these challenges. The delays in implementing new applications make it difficult to correctly measure the return on investment (ROI). This article explores some of the tangible ways in which ROI can be correctly measured after a BPM project implementation.

Why the delay?
During the requirements phase, the business process (operations) users might have different expectations of an automated process. In most scenarios, the operations users tend to be influenced by the legacy systems while describing the process requirements. After the IT team develops the automated process application eliminating inefficiencies, differences crop up. This problem is compounded if the process "playback" methodology is not adopted while developing the application. The tug of war between IT and operations leads to delays in implementing the process application, hence delaying the ROI and increasing the project costs. A lack of executive sponsorship for championing the process improvement cause among the business users is also one of the chief reasons that delays adoption of the new application. A process performance benchmark set before an IT initiative is undertaken would solve all these problems.

How to set process benchmarks?
The first step is to achieve process maturity within the organization. This need not wait until a BPM initiative is identified. A comprehensive process inventory capturing the existing processes within an organization gives a good view to plan the process benchmarking. If a Big Bang approach to the process inventory is not possible, the process assessment can be limited to a business unit or function. It can be started with a value map, a capability map or an organization chart. A combination of top-down and bottom-up approaches should be adopted to create process awareness. Business users should be enlisted to help identify the critical processes.

Steps to set process benchmarks:

Identify objectives for each area for the next few years:

  • What are the processes that currently exist in these areas? Who owns them?
  • Are these processes helping to achieve the objective?
  • What is the desired state for the processes to achieve the objectives?
  • How do you measure if processes are helping you achieve the objectives?
  • Which business units are involved in the business process?
  • What are the key areas impacted by the business process?

Define KPI & SLAs:
  • Is the goal to reduce time, achieve agility, improve visibility, or all of the above?
  • What should be the end-to-end cycle time of the business process?
  • Do the current SLAs apply to the new objectives?
  • How do my SLAs compare against my competitors’ (where comparison is possible)?
  • Do the current processes create enough and relevant data for me to measure my business performance? What should I do to create digital data?
Set Employee Measures:
  • What is the best utilization of my full-time employees (FTEs) within these units?
  • Are the processes too dependent on people?
  • What will happen to my operations if my employees quit?
  • What type of employee skill sets do I need in the next few years? Do these processes support working with such skill sets?
Set Revenue & Cost Measures:
  • For revenue impacting processes, how do these processes measure up against the revenue targets for next quarter/month/year?
  • For customer impacting process, what do the customer satisfaction ratings say? What are the desired ratings?
  • What are my cost savings targets for next quarter/month/year?
These are some of the ideas for setting up process benchmarks. When these benchmarks are set up proactively, it keeps the measures independent of any BPM/Automation initiatives and provides an objective measure for ROI on the BPM/Automation initiatives.

Setting ROI Measures:
Once the process benchmarking has been set, it is a good time to identify BPM and other digital/automation initiatives. Select a critical process improvement initiative that can be implemented in around 4 months to show value and achieve a quick win. In order to regularly measure the progress of the process initiative, the benchmark targets should be broken down into multiple, phased targets. 
E.g.: 
  • After iteration one, the process cycle time should be reduced by X hours
  • After iteration two, the FTE utilization should only be 60% of process work
  • After iteration one, work assignment should be dynamic and flexible. 
The ROI should be mapped against these benchmarks. Each benchmark should be assigned a $$ value, the expected time to achieve the benchmark should be taken into account, and a discount factor should be applied to this $$ value. The increase in revenue $$ and the cost savings $$ should be added to create the complete ROI figures.

Measuring ROI:
True continuous improvement is achieved in an iterative manner:

ROI Measurement Cycle
  1. Compare the process/rules/IT performance against benchmarks regularly
  2. Implement data analytics tools to measure business data impacted by business processes
  3. Compare the process KPIs with the captured data
  4. Perform a regression: is there a correlation between process KPIs and my revenue/cost drivers? (see the sketch after this list)
  5. What KPI measures do you have to meet/improve to achieve the desired objective?
  6. Identify process areas that can improve these KPIs
  7. Get back to process improvement
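As a simple illustration of the regression step above, the sketch below computes the correlation between a process KPI series and a revenue series; the data and variable names are made up for illustration.

// Illustrative only: hypothetical KPI (cycle time) and revenue series.
function pearsonCorrelation(x, y) {
  var n = x.length;
  var sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0, sumY2 = 0;
  for (var i = 0; i < n; i++) {
    sumX += x[i];
    sumY += y[i];
    sumXY += x[i] * y[i];
    sumX2 += x[i] * x[i];
    sumY2 += y[i] * y[i];
  }
  var numerator = n * sumXY - sumX * sumY;
  var denominator = Math.sqrt((n * sumX2 - sumX * sumX) * (n * sumY2 - sumY * sumY));
  return denominator === 0 ? 0 : numerator / denominator;
}

// Monthly average cycle time (hours) vs. monthly revenue ($K) - hypothetical data.
var cycleTime = [72, 65, 60, 52, 48, 41];
var revenue = [410, 425, 440, 470, 495, 520];
console.log(pearsonCorrelation(cycleTime, revenue)); // close to -1: strong negative correlation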

The ROI should be measured against a minimum of 3 process cycles (around 2 years):

ROI Measurement in Phases
- Project investment ($$) + Iteration 1 ROI / (1 + IRR) + Iteration 2 ROI / (1 + IRR)^2 + Iteration 3 ROI / (1 + IRR)^3 = 0
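As a quick worked example of this formula, the sketch below computes the net present value of the phased ROI at a given discount rate; the investment figure, iteration ROI values and rate are hypothetical.

// Illustrative only: hypothetical cash flows for a three-iteration BPM project.
// A result near zero means the chosen rate is approximately the project IRR.
function npvOfPhasedRoi(investment, iterationRoi, irr) {
  // investment: upfront project cost (a positive number)
  // iterationRoi: array of ROI $$ values realized after each iteration
  // irr: discount rate as a decimal, e.g. 0.12 for 12%
  var npv = -investment;
  for (var i = 0; i < iterationRoi.length; i++) {
    npv += iterationRoi[i] / Math.pow(1 + irr, i + 1);
  }
  return npv;
}

// Hypothetical example: $500K investment, three phased ROI targets, 12% rate.
console.log(npvOfPhasedRoi(500000, [150000, 250000, 300000], 0.12));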

Closing Comments:
Setting process benchmarks builds accountability into BPM projects. Setting the correct expectations and making budget decisions based on an ROI calendar helps reduce delays in implementing BPM projects and adds true value to the business.

To learn more about Prolifics' BPM solutions, visit our website.


N.R. Vijay is a Solution Architect in the Business Process Management division of Prolifics. He has over 10 years of consulting experience across domains such as Retail, Healthcare and Banking. Specializing in technology, management concepts and enterprise strategy, he is focused on change management and process improvement initiatives. He co-authored a whitepaper titled "Improving Customer Loyalty through Business Process Optimization and Advanced Business Analytics"



ODM Integration with SPSS Predictive Analytics Suite - Part 1: PMML Import

There are a few ways of integrating the SPSS Predictive Analytics Suite with ODM. To get started, you need to install the SupportPac (LB02).

The SupportPac provides two features that support the usage of business rules and Predictive Analytics together:
  • Part 1: Import a decision tree model via PMML (Predictive Model Markup Language) and generate an Operational Decision Management decision tree at design time (discussed in this article)
  • Part 2: Reference predictive scores within business rules and obtain those scores at runtime from the SPSS Scoring Service (not discussed in this article)
In this article, I describe step-by-step instructions for using ODM's PMML import capability for Decision Tree models. Part 2 will be a separate article describing the ODM-SPSS Scoring Service approach. Both approaches require an installation of the IBM WebSphere Operational Decision Management Integration with the SPSS Predictive Analytics Suite SupportPac.
To install the IBM WebSphere Operational Decision Management Integration with the SPSS Predictive Analytics Suite SupportPac, you must:
  1. Unzip the SupportPac deliverable in the WebSphere Operational Decision Management installation directory.
  2. Install the predictive analytics features from Rule Designer.
PMML is the leading standard for statistical and data mining models. It uses XML to represent mining models, so that models can be shared. In other words, using PMML, models can be developed on one system using one application and deployed on another system using a different application. Models can be created and PMML can be generated and exported using the SPSS Modeler.

The PMML import approach works for Decision Tree models. Decision Tree models are produced by data mining algorithms (such as CHAID, C&RT, ID3, C4.5/C5.0) that identify various ways of splitting a dataset into branch-like segments, forming an inverted tree that starts with the root node at the top of the tree. Decision Tree models are used frequently in the data mining community for classification and prediction as they are easy to understand, easy to use, support both quantitative and qualitative measurements, and are very robust. Data mining workbenches, like the SPSS Modeler, provide rich toolsets for creating and validating Decision Tree models.

The PMML import feature focuses on the Decision Tree model. After your Decision Tree model is exported from a modeling tool to a PMML file, you can import it into a decision tree.

The primary difference between a Decision Tree model, as used in the data mining community, and a decision tree is that the decision tree has actions attached to the leaf nodes while the Decision Tree model usually has some sort of predicted variable or classification attribute specified for each node. In other words, a Decision Tree model can identify the business rules for classifying and predicting a specific variable, whereas the decision tree can actually execute those business rules along with the appropriate actions at run time.

1. When importing PMML, you have a choice either to generate the BOM elements the model uses or to map existing BOM elements to the fields in the model.


2. The PMML import creates a Decision Tree and the BOM elements used in the model. You may get B2X errors and warnings until you create the corresponding XOM class.

3. At this point you can treat the decision tree like any other decision tree (if any) or rule artifact created in Rule Designer, and you can modify/edit it. Best practices around ruleflow orchestration suggest that each decision tree should be contained within its own rule task.

Note that an imported decision tree currently has no life-cycle link to the PMML file. Consequently, if you change the PMML model itself, you will have to repeat the import/modification process.


Artur Sahakyan is an Associate Consultant at Prolifics specializing in IBM WebSphere Operational Decision Management (v5.xx - v8.xx). Artur has a strong background in mathematics and probability/statistics. He also has profound knowledge of IBM Business Process Manager, IBM Integration Bus (IIB v9), IBM WebSphere MQ (v7), IBM SPSS Modeler, IBM SPSS Statistics, Java, C++, C.