Wednesday, August 27, 2014

Achieving Regulatory Compliance with Decision Management

The 2008 financial crisis affected each of us in some manner, and financial institutions and banks in particular felt most of the heat. The crisis brought several repercussions in the form of increased regulation and new legislation intended to prevent such an occurrence in the future. The aim of these regulations is to maintain confidence in the financial system, increase financial stability, protect consumers and reduce financial irregularities.

Since financial institutions now operate in a climate of increased compliance and regulation, there has been a rise in consulting firms – both technical and advisory – providing specialized services that help these institutions implement regulatory compliance, so that they can focus on their business while keeping up with ever-changing regulations.

It would be futile to jump into a solution without first understanding what regulatory compliance means. Compliance means conforming to a rule, which can be a policy, standard or law. Regulatory compliance describes the goal that companies aspire to achieve: conforming to all relevant laws and regulations.

Where do business rules fit in the picture?
Business rules are, by definition, statements that describe the policies or constraints of an organization. Since compliance requires conforming to policies, business rules are a natural placeholder for those policies, for several reasons. First, rules are repeatable and amenable to automation. Second, rules are transparent and easily traceable, which increases the visibility of the policies that must be complied with. Business rules implemented with IBM’s Operational Decision Management software can be exported to a Word or Excel document, and even emailed to an organization’s legal department in the same format in which they are written. Third, rules can be changed easily, with zero downtime to push the change to production. This helps organizations cope with an ever-changing regulatory environment and allows them to focus on their business rather than devoting precious resources to keeping up with regulatory change.

How can regulatory compliance be achieved by Operational Decision Management (ODM)?
The best way to describe ODM’s capabilities for regulatory compliance is to take an existing compliance obligation that firms constantly deal with and propose an implementation using ODM. We take one of the most challenging regulations recently (2010) enacted by the 111th US Congress: the Foreign Account Tax Compliance Act, more popularly known as FATCA. The act aims to tackle tax evasion by US citizens using tax havens or strong data-protection countries like Switzerland. Foreign financial institutions such as banks, insurance firms and fund houses are affected by FATCA and need to comply with its regulations. Individuals with US nationality, a US address or phone number, and corporations with substantial US ownership are covered by this legislation. Complying with FATCA is so complex, and at the same time so necessary, that IBM offers a specialized FATCA solution among its offerings.

One of the challenges FATCA brings is the amount of information it requires an organization to process, which places a particular burden on the organization’s technology platform. FATCA affects the technology platform in three areas: customer classification, transaction monitoring and, finally, IRS reporting.

In our business case example, let us study customer classification. In order to comply with FATCA, financial organizations have to collect a W-9 form from all account holders who are US Persons. This is clearly business logic, and it can take an ugly and complex turn when implemented in application code. The solution: WebSphere Operational Decision Management (ODM). The above business logic can be copied almost word for word and represented in the form of a business rule, authored in what is called Rule Designer. This is how the same business logic looks when written in ODM as a business rule:
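The screenshot of the rule is not reproduced here. In ODM the rule is written in near-natural Business Action Language; purely as an illustration, the equivalent classification logic might be sketched in code as follows (the class and field names are assumptions, not ODM artifacts):

```python
# Illustrative sketch only: ODM expresses this as a readable business rule,
# not as code. Names below are assumptions for the sake of the example.
from dataclasses import dataclass, field

@dataclass
class AccountHolder:
    name: str
    us_person: bool                      # US nationality, address or phone number
    required_forms: list = field(default_factory=list)

def classify_for_fatca(holder: AccountHolder) -> AccountHolder:
    """If the account holder is a US Person, require a W-9 form."""
    if holder.us_person:
        holder.required_forms.append("W-9")
    return holder

holder = classify_for_fatca(AccountHolder("J. Doe", us_person=True))
print(holder.required_forms)  # ['W-9']
```

In the actual product, a business user authors and edits this condition as readable text in Rule Designer rather than in code, which is precisely the point of the rule-based approach.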

The above business rule can be exported as-is to what is called Decision Center, the special portal that business users have access to with the ODM suite of products. Decision Center gives immense visibility to the rules across an organization: major stakeholders can log in to this portal and view the contents of critical decision tables or business rules. Returning to our scenario, the same FATCA rule, once deployed to Decision Center, can be edited by business users at the click of a button. By clicking the “Edit” link shown below, a non-technical user can easily modify the rule:





Any changes to these business rules can be deployed directly to the production environment through the Decision Center portal. Obviously, there are various recommended governance strategies that provide checks and balances, along with regression testing, so that incorrect information is not pushed to production servers. Nevertheless, the capability to change an existing policy (or a decision table) is available with ODM.

Conclusion
Regulations are here to stay, and the sooner organizations adapt to implement compliance with them, the better positioned they will be against their competition. In our FATCA example we saw how ODM can be leveraged to implement changes at a lightning pace. There is much more that can be achieved with ODM; this gives just a small glimpse of what your organization can look forward to when selecting ODM as a solution to meet its compliance needs.



Akshat Srivastava is a Senior ODM Consultant at Prolifics with about 7 years of experience in the IT industry having worked in insurance, banking, retail and public sector companies. He is experienced in all aspects of the development life cycle, including bottom-up estimates, analysis, design, development, testing, release management, and bug-fixing. He has created rule based solutions at various clients, authored rule repositories and best practice documents while focusing on WebSphere Operational Decision Management as the implementation environment. He has also created BPM applications for client onboarding for leading financial institutions. Akshat holds a bachelor’s degree in computer science from California State University.



Tuesday, August 26, 2014

Testing Philosophy in ODM: Feasibility of Complete Rule Testing in Decision Validation Services (DVS)

Software testing is an important step in the software development life cycle, and IBM Operational Decision Management (ODM) is no exception. Testing in ODM is done through Decision Validation Services (DVS). DVS automatically generates an Excel sheet with specified input fields from the Execution Object Model (XOM). To run a test you fill out the Excel sheet with test cases and expected outputs; each row represents one test case. In this article I would like to discuss the feasibility of running a complete (or close to complete) test in DVS based on the number of fields and the complexity of the decision (rules).

Let us first consider the feasibility of a complete test regardless of the technology/software choice. For simplicity, consider rules over only three fields (Field1, Field2, Field3), each of which carries a binary value. The maximum number of test cases needed to run a complete test for the ruleset is 2^3 = 8.

#   Field1   Field2   Field3   Ruleset Output
1   T        T        T
2   T        T        F
3   T        F        T
4   T        F        F
5   F        T        T
6   F        T        F
7   F        F        T
8   F        F        F

This is a simple example (n=3), but it can help us visualize and understand how to handle test cases with many more fields/elements. Note: 2^3 = 8 is also the maximum number of unambiguous rules that can be implemented with 3 fields. This can also be visualized as a binary tree: the height represents the number of fields and the leaves represent the rules. For example, the leftmost leaf R1 corresponds to rule #1 in the table above (F1=true and F2=true and F3=true).
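The enumeration above is mechanical, so the full set of test-case combinations can be generated rather than typed by hand. A minimal sketch in plain Python (not DVS itself, which generates its scenarios from the XOM):

```python
# Generate the complete truth table for n binary fields: the 2^n test
# cases a complete test would need (here n = 3, giving 8 rows).
from itertools import product

fields = ["Field1", "Field2", "Field3"]
test_cases = list(product([True, False], repeat=len(fields)))

assert len(test_cases) == 2 ** len(fields)  # 8 rows
for i, case in enumerate(test_cases, start=1):
    row = "  ".join("T" if v else "F" for v in case)
    print(f"{i}  {row}")
```

The rows print in the same order as the table above, from T T T down to F F F.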


In general, for N fields each of which can take K values (or groups of values), at most K^N unambiguous rules can be implemented. In the above example N=3 and K=2 (binary fields). We can generalize even further to fields with different numbers of accepted values: if there are n fields each of which can take k values, and m fields each of which can take p values, at most k^n * p^m unambiguous rules can be implemented.
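The counting argument can be checked with a few lines of code; `max_rules` below is an illustrative helper, not part of ODM:

```python
# Maximum number of unambiguous rules for mixed field arities:
# n fields with k values each and m fields with p values each -> k**n * p**m.
def max_rules(*groups):
    """groups: (number_of_fields, values_per_field) pairs."""
    total = 1
    for count, values in groups:
        total *= values ** count
    return total

print(max_rules((3, 2)))          # 3 binary fields -> 8
print(max_rules((2, 3), (1, 4)))  # two 3-valued fields and one 4-valued -> 36
```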

Another philosophical question we should ask is how much testing is sufficient. For example, if we have implemented R1, R2 and R3, do we have to implement test cases for the tracks R4 through R8 – that is, do we have to prove that unimplemented rules did not fire unintentionally? The answer is: it depends. We may have to add one or two test cases for unimplemented rules, depending on the complexity of the rules. Example: a loan approval application has the rule “if the age of the applicant is greater than 40 and the credit score is ‘good’ then approve”. If we were to write test cases for that single rule, we might do the following:

#   Age is greater than 40   Credit score is “good”   Expected output
1   T                        T                        Approve
2   T                        F                        The default approval status
Theoretically we did not have to include the second test case, but it is good to make sure the approval is set by the implemented rules and not by some bug that sets it to “approve” regardless of the rule conditions.

What if there were 100 fields instead? Let us keep them binary for simplicity (it would not change the problem if they could accept 3, 4, 5, ..., n values). The maximum number of rules that can be constructed, and the maximum number of test cases needed for a complete test, is 2^100.

#       Field1   Field2   ...   Field100   Ruleset Output
1       T        T        ...   T
2       T        T        ...   F
...
2^100   F        F        ...   F

Oftentimes we implement rules with more than 100 XOM elements, and to run a complete test we would need 2^100 ≈ 1.27×10^30 test cases. Theoretically this is an infeasible problem if we consider the worst-case scenario.
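The arithmetic behind that estimate:

```python
# Worst-case count of test cases for 100 binary fields: 2**100.
n = 2 ** 100
print(n)            # 1267650600228229401496703205376
print(f"{n:.2e}")   # 1.27e+30
```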

The number of fields is relevant only in the theoretical discussion of worst cases (2^n, where n is the number of fields). In practice, the number of test cases needed for DVS is equal or comparable to the number of rules. That means that if it was feasible to create N rules, then it is feasible to implement sufficient test cases for complete testing in DVS.

To learn more about Prolifics' ODM solutions, visit our website or contact solutions@prolifics.com.


Artur Sahakyan is an Associate Consultant at Prolifics specializing in IBM WebSphere Operational Decision Management (v5.xx - v8.xx). Artur has a strong background in mathematics and probability/statistics. He also has profound knowledge of IBM Business Process Manager, IBM Integration Bus (IIB v9), IBM WebSphere MQ (v7), IBM SPSS Modeler, IBM SPSS Statistics, Java, C++, C. 


Tuesday, July 29, 2014

Customers Purchase Benefits, Not Products

Business organizations have core competencies in certain areas and strong capabilities built around them. Firms tend to look for outside consultants in areas where they do not have enough homegrown talent, and are likely to outsource capabilities such as IT when there is an internal gap. This article is a quick overview of some of the challenges firms face when deciding which capabilities to outsource.

Businesses are facing a challenging landscape. The customer is connected and empowered. The customer can compare healthcare insurance plans at the click of a button, or search online for the best price for consumer goods and have them delivered the same day. The customer is purely shopping for benefits, while firms are selling products and services to provide those benefits. Consumer needs have changed, since consumers are exposed to products from across the globe, and their access to information is unbounded thanks to the internet. This has resulted in firms re-evaluating their business models to see if they still serve evolving customer needs. An insurance company offering high-deductible plans with wide coverage might no longer be relevant in a new target market; the new market demand, due to changing demographics, might be for lower deductibles with a smaller specialist network. A historically well-performing video game with MP3 playback capabilities might no longer be relevant because of the availability of music on smartphones.

Under such scenarios, the firm has to rethink its business model and make key decisions about how best to leverage its existing capabilities to meet new market demands. After the firm has decided to revamp its product or service to best meet new consumer needs, it has to review its existing capabilities: human resources, IT, suppliers and so on. Because of the competitive landscape, the time to market for modified products and services has to be very short. Employee productivity becomes critical to achieving high speed to market, and IT systems play a key role in increasing that productivity. When selecting an IT product, firms have to choose one that best complements their employees’ skills. Not every off-the-shelf product will meet a firm's needs; especially since changing business needs require updated employee skills, it is rare for off-the-shelf products to meet a firm's new needs exactly. IT products need to be customized to the needs of the firm.

Under such a scenario, it is critical for firms to engage with IT product and consulting companies that have worked with firms from various industries and effectively understand the changing needs of their clients. Firms such as IBM have worked with clients from various industries for a long time and collected extensive domain knowledge. This knowledge helps IBM and other firms in its ecosystem quickly develop products and solutions that deal effectively with emerging challenges. IBM products such as IBM BPM 8.x (Business Process Manager), Cognos and ODM (Operational Decision Management) carry with them a history of experience with business clients with evolving needs. This experience has resulted in features that are applicable and adaptable to various business needs. For example, the business process design and integration capabilities are as adaptable to a retailer's in-house legacy mainframe systems as they are to a newly on-boarded supplier's web services, and Cognos data analytics capabilities can “talk” to an insurance firm's legacy data sources as well as capture real-time data generated by the process.

When the time to market is so short, it might be difficult for firms to build IT capabilities in-house. Firms are more likely to benefit from outsourcing IT development in areas where internal employees have not yet matured. Moreover, the key upside of outsourcing an IT skill is quick development of IT applications, helping toward the final goal of delivering a firm's product or service to market faster. Before choosing an IT consulting firm, the business has to evaluate the consulting firm's client portfolio, the nature of projects it has implemented historically and the breadth of its domain expertise. IT consulting firms that have worked with key players in a particular industry have better exposure to industry challenges, and consultants with a shorter learning curve will achieve higher productivity while helping to design and build a suitable IT application.

Bottom Line: When businesses choose the right, flexible IT application and partner with consultants with the right skills, they increase their chances of effectively catering to the evolving needs of their customers.


N.R. Vijay is a Solution Architect in the Business Process Management division of Prolifics. He has over 10 years of consulting experience across domains such as Retail, Healthcare and Banking. Specializing in technology, management concepts and enterprise strategy, he is focused on change management and process improvement initiatives. He co-authored a whitepaper titled "Improving Customer Loyalty through Business Process Optimization and Advanced Business Analytics"

Thursday, July 24, 2014

4 Steps to Risk-Based Software Testing

Risk-based testing is an approach that allows us to plan our testing efforts in a way that reduces the residual level of product risk when the system is deployed. A risk-based approach helps you understand the risks at an early stage so that a mitigation strategy can be scheduled and implemented. Test effectiveness indicates the level of effort required to mitigate the risk of implementing a change: the higher the test effectiveness required, the more rigorous the test and evaluation activities should be. The following factors are used in determining the required test effectiveness:
  1. Impact
  2. Probability of Failure
  3. Regression
  4. Recovery
1. Determine Impact by Analyzing: (1. Min, 2. Low, 3. Medium, 4. High, 5. Severe)
Impact refers to the potential damage that the business might suffer if the intended functionality is not delivered. When assessing impact, the chance of the change negatively affecting other functions/features is not considered, as that is captured under a separate attribute (Regression). The higher the impact, the more rigorous the tests should be. For example, if a simple report is being implemented and is used by only a few users, the potential damage would be minimal and the impact assessment should result in a rating of “min”.

The following checkpoints should be considered when assigning a rating for the impact factor:

  • A. Is the solution component a primary function/feature of the solution (i.e. must-have vs. nice-to-have)?
  • B. Is the solution component independent, or are other business processes dependent on it?
  • C. Does the data pertaining to transactional volume, financial and other operational considerations indicate significant utility?
  • D. Is the solution component used by important stakeholders (large customers, regulators etc.)?
  • E. Is the impact to external stakeholders or internal ones?
  • F. Is the impact to a single business unit or multiple business units?
  • G. How many stakeholders/users might be impacted?
  • H. What are the impacts based on the implementation and roll-out strategy? For example, some processes may not be executed immediately after implementation
  • I. How frequently is the solution component used?
  • J. Real time vs. batch (real time generally leads to immediate impacts and is therefore more risky)

2. Determine probability of failure by analyzing: (1. Min, 2. Low, 3. Medium, 4. High, 5. Severe)
Probability of failure is an assessment of overall risk based on various considerations such as the complexity of the solution, ambiguity in requirements, complex logic etc. The following checkpoints should be considered when assigning a rating for this factor:
  • A. Technologies used (New technologies lead to higher risk)
  • B. Level of Customization (Higher customization leads to higher complexity)
  • C. Complex logic and business rules
  • D. Real time vs batch. Real time typically would be more risky as impacts are immediate
  • E. Higher defect density as perceived from prior testing engagements
  • F. Development effort (The larger the development, the more potential for failure) 
  • G. Ambiguity in requirements
  • H. Complexity of solution
  • I. Rushed schedule
  • J. Dependency on integration with external systems/partners

3. Determine Regression Impact (1. Min, 2. Low, 3. Medium, 4. High, 5. Severe)
Regression impact is the impact to the existing business processes and functions and is weighted very heavily in terms of the overall determination of the test effectiveness required. This is also the most important focus of the service validation and test group.
  • A. Changes to high risk areas
  • B. Changes to highly integrated areas (the same code is shared by multiple business units/processes etc.)
  • C. Lack of clear definition of the scope of changes (such as support packs without clear release notes)
  • D. Scope of regression based on the change

4. Determine Recovery Effort/Difficulty from Potential Failure (1. Min, 2. Low, 3. Medium, 4. High, 5. Severe)
When determining the test effectiveness required to mitigate the risk of the change, the ability to recover from a potential failure needs to be considered. Even if a failure occurs, if recovery is possible quickly, then the risk is mitigated to an extent. However, if recovery is very difficult then the test effectiveness needs to be high if the solution component is critical to operations.
  • A. Existence of work around if potential failure occurs
  • B. Existence of back out procedure and ease of performing back outs
  • C. Ability and turnaround time to fix problems in case of failure
  • D. Is the failure reflected real time or is it more batch oriented
  • E. Existence of alerts or early warning indicators to aid proactive intervention

Risk Based Testing – An Example:


Prolifics specializes in providing end-to-end testing and test automation solutions that are backed by a unique service framework, proven test accelerators and one of the highest defect removal efficiency rates in the industry. Our highly skilled team of testing specialists helps enhance IT productivity and improve application quality, saving millions of dollars through early detection and scope coverage.

To learn more, visit http://www.prolifics.com/quality-assurance-testing.

Jagan Erra is a Delivery Manager in the Testing Practice at Prolifics. With over 15 years of experience, Jagan has a proven ability to identify, analyze and solve problems to increase customer satisfaction and control costs through expertise in program development and management, IT quality processes, models - ITIL, ISO, client training and cross-functional team leadership.

Wednesday, July 23, 2014

Client Showcase: Retailer Better Meets Customer Needs with Managed Services

Prolifics is committed to helping our clients create (and grow) competitive advantage in their industry. We are proud to have empowered this well-known retailer to do just that, again: after a successful e-commerce solution, Prolifics and our client teamed up to bring ongoing managed services that further differentiate them from the competition.

Our client is a high-end department store chain based in the United Kingdom. Business leaders previously began a strategic initiative to expand the Company’s e-commerce capabilities by expanding the product lines available online and improving connectivity with their distribution service to shorten delivery times in support of a higher volume of online purchases. Prolifics successfully led the implementation of a centralized Warehouse Management System (WMS) that would serve as the foundation for this project, and the Prolifics team continued to provide support as needed, delivering the IBM WebSphere MQ and IBM Integration Bus skills required to troubleshoot issues and make updates within the system.

Over time, the Company found that their internal IT staff was spending approximately 60% of their time supporting the WMS production environment rather than deepening the capabilities of the solution. As ad hoc support service costs for the solution began to balloon during the holiday season, their busiest season for online purchases, business leaders began to explore better long-term support options with Prolifics.

Valuing the deep technical expertise Prolifics has with IBM WebSphere MQ and IBM Integration Bus, our client engaged the Prolifics team to provide ongoing managed services in order to effectively maintain their WMS system while releasing internal staff to focus on other critical business issues. Prolifics recommended the implementation of SmartCloud APM and SmartCloud Control Desk to enable real-time alerts and provide a centralized ticketing system for submitting and addressing IT issues. Prolifics experts then led the implementation of these tools and developed reasonable service level agreements (SLAs) for the ongoing services that would meet the Company's needs.
With managed services now in place, the Company is assured of timely, knowledgeable support for the more than 50 workflows in their WMS solution, with dedicated Prolifics staff available during regular call center hours to rapidly address issues and provide development support as needed. Further, with the new monitoring tools in place, staff can more proactively identify and address potential failures to ensure functionality and ultimately create a better user experience for online consumers.

To learn more about Prolifics Managed Services, visit: http://www.prolifics.com/managed-services

Implementing an Enterprise Services Layer - Reusable Lessons Learned

Technology implementations can be challenging. When the implementation involves several teams and multiple business units, and requires a different approach than the one currently in place, the process can become even more demanding and at times daunting. However, if done right, the outcome can deliver significant benefits for an organization.

In this article, we will be looking at the implementation process of an Enterprise Services Layer built on Service Oriented Architecture by the Prolifics Integration team and the important lessons learned during the course of the implementation. After all, reusable lessons go hand in hand with the process of developing and implementing reusable services.

This article is divided into three sections:
  • Details of the solution and the major decisions taken during project lifecycle
  • The lessons learned during the implementation process
  • Project Success
Solution Details
The Business Challenge
Due to the lack of an enterprise-wide integration layer, business units were unable to easily consume the services offered by other units and share business data. This prevented the organization from providing new, value-added services to consumers, resulting in lost market opportunities and revenue growth.

The Business Requirements
  1. Develop a common service hosting layer that various business units can utilize to access enterprise services
  2. Build a highly available, scalable and flexible solution that provides maximum throughput and 99.999% uptime
  3. Implement an effective service governance framework
  4. Implement a platform and services monitoring and reporting solution
Project Kick-Off Approach
The Prolifics team worked with leaders and representatives from the customer's IT departments and business units to understand the corporate culture in general and the team culture in particular. The idea was to understand the way projects typically get done at the customer's organization, and to customize our approach to developing and deploying the solution around the current processes and practices.

1. Team Education
One of the first tasks undertaken was to educate the stakeholders involved and bring about a common understanding of the project goals and solution. The goal was to clearly communicate the scope of the solution: that is, what the solution is meant for and what it is not.

2. Create SOA principles
Since this project was the first to build an enterprise-wide services layer based on SOA, we established a set of SOA principles in order to keep the implementation aligned with the project requirements and maximize the organization’s return on investment in the project.
  1. Consistent service definitions and implementation
  2. Consistent and secure access to corporate processes and data
  3. Standard-compliant enterprise governance
  4. Consistent and well-defined data model
  5. Cross-enterprise platform and services monitoring
3. Establish an SOA Center of Excellence
A successful implementation of a large scale SOA initiative requires bringing people from different business and technical areas together to support the implementation goals. The COE team was tasked with creating a solution blueprint to drive the overall implementation tasks and help the organization to adopt, formalize and improve the development process. 

The following were the main responsibilities of the COE team.


The Technology Foundation
To meet the service layer requirements, choosing the right technology foundation to host the solution is an important aspect; this is especially true since multiple components and systems need to integrate and function together seamlessly. It is important to consider not just the current requirements but also the long-term ones: future growth, the flexibility to add additional services, integration with new and disparate systems, and support for additional throughput, all while maintaining performance, scalability and availability.

The solution comprised the following products:





Key Factors in Choosing the Technology:


Deployment Topology:

The below diagram shows the solution deployment topology.



  1. The service consumers connect to the global load balancer, which, based on the consumer's region and load-balancing algorithms, routes requests to one of the available downstream load balancers
  2. The service consumers use a single endpoint that provides abstraction and high availability
  3. The DataPower devices act as a secure gateway, provide additional internal load balancing and serve as the SOA policy enforcement point
  4. IIB – a highly available IBM Integration Bus architecture provides the service execution, message flow orchestration, advanced transformation and data enrichment platform
  5. Fail-over infrastructure – multi-node, inter-frame and cross-site infrastructure provides advanced fail-over mechanisms
  6. WSRR – WebSphere Service Registry and Repository provides service life-cycle governance features and acts as the service policy definition point
Lessons Learned
Educate the Stakeholders
Ensure that the stakeholders understand SOA principles, the project scope and the project goals. This needs to be done right at the start of the project; it saves a significant amount of time and money during the overall project roll-out and helps keep the project on schedule.

Security 
Plan for security right at the start of the project; this includes determining the protocols, encryption technologies, and authentication and authorization processes. If the customer does not have common enterprise-wide security standards, individual teams may use different and at times incompatible security practices and processes, which can lead to substantial challenges during the integration process.

The Center of Excellence needs to provide sample code, documentation and implementation approaches to assist the different teams in pursuing a common security standard.

Make sure service consumers understand the implementation details
Ensure that the service consumers understand the details of connecting to the service layer and consuming the available services. Most often, service consumers face challenges when it comes to connectivity details, security implementation details etc. To make the process easier, have test beds made available for the consumers. Additionally, a functioning and well documented integration environment will further help in this process. 

Follow industry and internal standards
As part of the Center of Excellence enterprise architectural decisions, make sure that the stakeholders agree on the industry as well as internal standards that need to be followed. With limited resources and budget, common standards and practices will help in easier and simpler overall implementation process. This will additionally help in ensuring that the various teams can talk a common language and provide common support and maintenance services. 

Avoid Service Layer Complexity
Keep the service layer free of unnecessary complexity. Only the operations that directly enable integration connectivity and facilitate associated supporting activities should be deployed on the service layer. It is important not to task the service layer with application- or service-provider-specific activities or functionality. The service layer needs to be lightweight, scalable and highly available. 

Iterative Deliverables
When an organization is just getting started with an SOA-based service layer approach, identify the services that can be easily and readily hosted on the service layer. By taking a long-term, iterative-deliverables approach, the return on investment can be maximized and risks can be reduced. 

Performance Testing
Most often, it is difficult to understand the performance behavior of various applications and systems in an Enterprise Service solution. With services hosted by multiple business units and often running on different technologies, performance bottlenecks can be a recurring problem. 

In order to properly understand the performance behavior of the various systems and the overall solution, it is important to do performance testing early and often. Surprises such as a bad network route or a wrongly configured server can be avoided if performance testing is incorporated as part of the development life-cycle and performance engineering best practices are followed. 
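A minimal sketch of such early performance testing, assuming a callable stand-in for a real service invocation (the sampling loop and thresholds are illustrative, not a prescribed tool), could look like this:

```python
import random
import statistics
import time

def measure_latency(call, samples=200):
    """Invoke a service `samples` times and summarize latency in milliseconds,
    so regressions surface early in the development life-cycle."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return {
        "p50_ms": statistics.median(timings),
        "p95_ms": timings[int(0.95 * (len(timings) - 1))],
        "max_ms": timings[-1],
    }

def fake_service_call():
    # Stand-in for a real service invocation during early testing.
    time.sleep(random.uniform(0.0, 0.002))
```

Running a check like this in every build makes a bad network route or a wrongly configured server show up as a percentile jump long before production.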

Monitoring and Reporting
In a typical SOA-based environment, tracking down performance and availability issues can be extremely challenging due to the loosely coupled design. Additionally, with services hosted by multiple providers, keeping track of SLAs and performance counters can prove to be even more difficult. 

In order to deal with these challenges, it is important to have an end-to-end monitoring solution. The solution should ideally monitor not just the operating infrastructure (performance, availability etc.) but also the business services (SLAs, service availability etc.). It is also important to have a solution that can provide useful and easy-to-understand reports that will assist in capacity planning and performance analysis.  
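As a hedged sketch of the aggregation step such a monitoring solution performs (the sample shape, SLA thresholds and field names are illustrative assumptions, not a product API), per-service SLA breaches could be flagged like this:

```python
from dataclasses import dataclass

@dataclass
class ServiceSample:
    """One monitored service invocation, as a collector might record it."""
    service: str
    response_ms: float
    succeeded: bool

def sla_report(samples, max_response_ms=500.0, min_availability=0.999):
    """Aggregate monitoring samples per service and flag SLA breaches."""
    by_service = {}
    for s in samples:
        by_service.setdefault(s.service, []).append(s)
    report = {}
    for name, group in by_service.items():
        ok = [s for s in group if s.succeeded]
        availability = len(ok) / len(group)
        worst = max((s.response_ms for s in ok), default=0.0)
        report[name] = {
            "availability": availability,
            "worst_response_ms": worst,
            "in_sla": availability >= min_availability
                      and worst <= max_response_ms,
        }
    return report
```

Feeding such a report into an easy-to-read dashboard gives both the operations team and the business a shared view of which providers are meeting their commitments.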

Organizational Readiness
IT departments and organizational units often find it difficult to introduce new initiatives that change the way projects are done. To ensure project success and maximize IT investment returns, strong leadership support for these initiatives is required. It is essential that the leadership team communicate the significance of the project and establish a strong project management team.

Importance of Service Governance 
In order to provide consistent and standardized development, delivery and implementation of services, and to reduce risks, a strong governance model needs to be established as part of the project life-cycle. In a large-scale implementation involving numerous teams, a strong service governance model that covers the entire service life-cycle can ensure that the implementation aligns with the business requirements and organizational goals.


Project Success
The Enterprise Server Layer currently handles over 6 million transactions per day. With a strong architectural foundation in place, the platform can seamlessly support additional throughput by horizontal or vertical scaling in a highly available environment. Additionally, the solution delivered end-to-end governance, monitoring and performance reporting. 

The solution proves that with proper architecture, design principles and team partnerships, organizations can achieve excellent return on investment and realize project success that can enable new market opportunities and business growth.


Authors:
The team of integration experts at Prolifics enables customers to maximize the return on their IT investments by providing end-to-end integration, governance and monitoring capabilities. To learn more about Prolifics' integration solutions, visit: http://www.prolifics.com/soa-universal-connectivity

What We Can Expect from the IBM-Apple Alliance

The IBM-Apple deal was one of the most talked about news stories in the world of IT this week. As Premier IBM Business Partners, we at Prolifics are thrilled to hear how these two innovative enterprises will continue to shape the future of mobile.

To learn more about the alliance, we decided to sit down with two experts who can share unique insight on what this means for IBM, today's businesses and consumers around the world. At Prolifics' Sea Change Summit this week in Montauk, New York, we met up with Ken Parmelee, Technology Business Development Executive, MobileFirst at IBM. We also sat down with Maya Abinakad, Marketing Communications Manager at Prolifics, to talk about what the alliance means for Prolifics' clients.


Ken Parmelee, Technology Business Development Executive, MobileFirst at IBM
Why do you think this deal is happening now? 
Apple and IBM have an opportunity to shape the mobile enterprise market uniquely. The enterprise market has for years asked for stronger management, security and more scalable solutions. This partnership provides enterprise grade iOS solutions.

In your opinion, just how big is this announcement, when looking at IBM’s 100+ year history?
IBM has had many historic announcements. This one ranks high due to the explosion of the cloud and mobile markets and the lack of enterprise solutions in those spaces. Apple's strong consumer focus, paired with IBM's design capabilities, will create even better user experiences in these areas. Both companies are at their best when they focus on their combined capabilities.

What are your predictions in terms of what customers can expect as a result of this deal?
Customers can expect:
  • World-class enterprise purchase and support for their devices
  • More secure solutions and services
  • Developer tools and services that speed development while creating beautiful, high-fidelity iOS experiences that scale. 
Why is data such a critical component of this deal?
The future of computing is all about predictive analysis and intelligent automation that drive personalized, contextual interactions and user experiences. By leveraging the capabilities of IBM MobileFirst for iOS, developers will be able to create applications that bring this to reality on Apple's mobile devices, which are renowned for their user experience.

Connect with Ken Parmelee today!


Maya Abinakad, Marketing Communications Manager at Prolifics
What do you think the Apple-IBM alliance says about technology and business today?
I believe the message this is sending to the market is significant. Mobile has very much been a disruptive technology and has transformed both our personal and professional lives. Until now, we have seen a clear separation between these worlds when it comes to mobile devices. The IBM-Apple alliance will bring a greater overlap, as IBM and Apple combine proven enterprise solutions together with a best-in-class user experience. I hope that this will lessen the divide between business leaders and creative leaders.

What does this deal mean for Prolifics’ clients?
It means they can have the best of both worlds. These solutions will include the market-leading strengths of IBM and Apple – bringing a completely new level of value for businesses. Our clients are forward-thinking, and this will enable them to leverage the full potential of mobility, all while benefiting from the excellent design and user experience of Apple mobile devices. We are excited to help them get to the next level in mobility.

In what ways do you think Prolifics will be able to contribute to this new alliance?
As a Premier IBM Business Partner, Prolifics has always been committed to leveraging IBM technologies to create customized IT solutions that create competitive advantage. Now, we will be able to offer powerful mobile enterprise solutions on iOS, empowering our clients in ways that were never before possible. As our clients continue to lead their industries in innovation, they will rely more heavily on Prolifics’ technical excellence and industry focus.

Connect with Maya Abinakad today!

Congratulations to IBM on this exciting initiative! We look forward to contributing to your continued innovation!


To learn more about Prolifics, visit www.prolifics.com.

To learn more about the IBM-Apple alliance, visit: http://www.ibm.com/mobilefirst/us/en/