Monday, December 23, 2013

IBM Connect Session Preview: Prolifics & Tufts Health - Creating a World Class Future User Experience Platform

IBM Connect 2014 is just around the corner! This is my first visit to the conference, and my first time as a session co-speaker. Along with one of our great customers, Tufts Health, I will be presenting on the replatforming of their digital user experiences using IBM Customer Experience Suite and IBM Employee Experience Suite products and capabilities.

What you might like to hear in this session:
  • Business and technology drivers for having an online experience platform
  • Selection criteria
  • Products in a user experience suite
  • Capabilities in a user experience platform
  • A real world implementation success story
I’m excited to show off the solution our teams created! To learn more about this IBM Connect session, click here.

To learn more about Prolifics' presence at IBM Connect, visit our conference page.

If you are planning to attend IBM Connect, I hope to see you at my session! Interested in connecting before the conference? Contact me today to schedule a meeting or learn more about my session with Tufts Health.
Session Details:
ECE203: Tufts Health: Creating a World-Class Future User Experience Platform
Date: Wednesday, January 29
Time: 3:00pm-4:00pm
Speakers: William Pappalardo, Tufts Health Plan; Tim Reilly, Prolifics
Abstract: In this session, you will learn why Tufts Health Plan chose IBM's Customer Experience Suite and Employee Experience Suite to replace their existing portal portfolio. Tufts Health Plan wanted to ensure they had a world-class, future-looking user experience platform in place before modernizing and investing in new capabilities for their users. The session will detail how they subsequently planned and delivered an effective online experience using portal, web content management, forms, social solutions and more. The team will discuss their business priorities, technology selection, lessons learned, and what's next on their roadmap.

Tim Reilly is Practice Director with Prolifics. He has led the implementation of many global projects using IBM WebSphere Portal and has extensive background in design and development of enterprise portals. He specializes in providing Enterprise Java and Portal solutions leveraging WebSphere Portal, Content Management, Tivoli and third party integration. He has over 10 years of experience with WebSphere Portal and is a former Apache Software Foundation committer.

Wednesday, December 4, 2013

Are you Prepared for Gen Y?

The first question many of you will have in mind is: "What does Gen Y mean?"

Wikipedia says “The Generation Y, also known as Millennial Generation, is the demographic cohort following Generation X. There are no precise dates when the generation starts and ends. Commentators use beginning birth years from the early 1980s to the early 2000s.”

Most importantly, here are a few notable characteristics of Generation Y (Gen Y) compared to other generations in the workplace:
  1. Gen Y demands flexible work hours – a story in Time magazine says “Gen Y Seeks Work-Life Balance Above All Else.”
  2. Gen Y is focused on extrinsic values such as image and money.
  3. Gen Y is highly tech-dependent – “Millennials Tech-Dependent, But Not Necessarily Tech-Savvy.”
Gen Y itself is one of the biggest drivers of many new initiatives in organizations, such as Bring Your Own Device (BYOD), social media (Facebook, Twitter, etc.) and more. Since Gen Y is the next greatest generation, most organizations are undergoing a paradigm shift, and that’s why perimeter-based security is evolving and moving to virtual perimeters.

Identity is the new virtual perimeter, and traditional Identity and Access Management solutions are not sufficient to enhance visibility into the environment or to provide actionable insight and risk-based scoring in the context of identity. The Prolifics Identity Intelligence solution with IBM QRadar delivers a comprehensive IAM.next solution with actionable insights and a 360-degree view of the IT environment. Our solution is uniquely positioned to extend the integration of traditional Security Information and Event Management (SIEM) to identity systems, more intelligently managing security risks from user activity in real time and providing continuous control of insider threats through identity analytics.

Want to learn more about our Identity Intelligence solution? Read more here.



Nilesh Patel is a Security Solution Architect with Prolifics and a preeminent identity and access management and security intelligence expert. Nilesh is a certified solution advisor for IBM security and compliance management solutions and an accredited IBM Redbooks® Master Author. Prior to Prolifics, Nilesh worked with IBM as a Senior Identity and Access Management and Security Intelligence Professional.

Friday, November 15, 2013

Prolifics Identity Intelligence for Today's Security Challenges

Security is a broad term, and it can mean different things to different people across an organization. One of the best frameworks for enterprise security is the IBM Security Framework, an architectural framework that addresses the security concerns of any organization. The IBM Security Framework breaks an organization's security down into pillars such as People, Data, Application and Infrastructure.

In today’s business environment, security controls are implemented using different products in each of these pillars. Let’s take the "People" pillar as an example. The People pillar of the IBM Security Framework addresses security in the context of an organization's identities and accounts; this is fulfilled by Identity and Access Management (IAM) solutions, which can be implemented using Identity Manager, Access Manager and Enterprise Single Sign-On products. The fact is, all of these products are good at what they are supposed to do individually, but they need to be reviewed constantly to measure their effectiveness. To fill this gap on the security intelligence side, solutions like Security Information and Event Management (SIEM) can connect to each and every component of the environment; because of this, SIEM can better understand the security posture of the complete environment, acting in effect as the "big boss" of the IT environment.

Prolifics' Identity Intelligence solution is a tight integration of the IAM and SIEM domains. When I say a tight integration, I do not simply mean collecting events from IAM environments. Identity Intelligence is about adding value to existing controls with intelligent security controls, such as tracking and reporting the misuse of access rights.

Prolifics has been a leader in leveraging IBM Security Software to deliver comprehensive Identity and Access Management solutions, and now Prolifics' new innovative Identity Intelligence solution using the IBM QRadar product family helps organizations more intelligently manage risks.

The Prolifics security team will be showcasing our Identity Intelligence solution at Gartner Identity & Access Management (IAM) Summit 2013. If you are attending the conference, be sure to visit us at booth # 411.

Interested in learning more about Prolifics' Identity Intelligence and other security solutions?
Visit Prolifics' website at: www.prolifics.com
Connect with Nilesh Patel: npatel@prolifics.com
Learn more about Prolifics at Gartner IAM Summit



Nilesh Patel is a Security Solution Architect with Prolifics and a preeminent identity and access management and security intelligence expert. Nilesh is a certified solution advisor for IBM security and compliance management solutions and an accredited IBM Redbooks® Master Author. Prior to Prolifics, Nilesh worked with IBM as a Senior Identity and Access Management and Security Intelligence Professional.

Wednesday, October 23, 2013

Customer Showcase: Prolifics Delivers Innovative IBM PureApplication Systems Solution

Recently, Prolifics delivered an innovative solution that leverages IBM PureApplication Systems, a platform system launched in 2012 that is designed and tuned specifically for transactional Web and database applications. Working closely with IBM, Prolifics delivered one of the first successful IBM PureApplication Systems solutions to date to our customer, a financial services company. This solution serves as a model for future implementations of the technology and is a testament to how companies can achieve real business value. The story is highlighted below:

This financial services company offers lending, leasing, and other financing to businesses in some 35 countries. The Company provides capital for a variety of assets, including industrial facilities and equipment, real estate, and corporate aircraft and vehicles. It also develops private-label financing programs, provides revolving credit, and makes equity investments in various industries.

As part of a multinational corporation, the Company was beginning to outgrow its IT environment built on five different application server platforms including IBM WebSphere, Oracle WebLogic, JBoss, Apache Tomcat and Oracle GlassFish. Business and IT leaders undertook four key initiatives in support of business growth: streamline processes and strengthen security under a centralized application server platform; add a mobile platform for accelerated processing and approvals of credit applications; expand the Company’s BPM framework to further enable rapid million dollar transactions; and enhance security with a forensic tool to monitor incidents across the enterprise.

As a longtime IT partner to this customer, Prolifics worked with the Company and IBM to build a solution that would best address the Company’s growing business needs – an initiative to build a cloud platform that would ensure speed, consistency and repeatability across the enterprise. The solution centers around four KPIs: reducing deployment times from months down to minutes, transitioning from quarterly to on-demand development rates, tracking and minimizing Failed Customer Interactions and enhancing on-demand compliance through automated reporting.

Prolifics and IBM developed a solution built on IBM PureApplication Systems and supported by IBM WebSphere Application Server, Worklight, QRadar, WebSphere MQ, WebSphere Process Server and WebSphere Process Center that would enable the new mobile functionality, integration, process management and security capabilities. With the solution in place, Prolifics' customer gains millions of dollars in cost savings as well as improved productivity.

Interested in learning more?
To read more about IBM PureApplication Systems, click here.
To learn more about this customer story or connect with a Prolifics subject matter expert, contact solutions@prolifics.com.

Visit www.prolifics.com for more information.

Wednesday, September 25, 2013

Session Replay: Estimating your Process Projects presented at FSOkx BPM Forum

Earlier this month, Prolifics' Matt Yeager and Anant Gupta hosted a session at the FSOkx 4th Annual Business Process Management and Technology Innovation Forum. The presentation focused on how business leaders can reduce the guesswork associated with the estimation process by considering the following questions:
  • What is it that you are estimating?
  • How big is the thing you are estimating?
  • What baselines are you using for your estimates?
  • Should you be estimating top down, bottom up or somewhere in between?
  • How do your estimates tie to your project plan?
  • Do your estimates reflect ROI and business value?
If you missed the forum, you can catch a replay of the presentation here!

Prolifics Session: Estimating your Process Projects


Interested in taking a deeper dive? Connect with us today!
Matt Yeager, Manager of Advisory and Consulting at Prolifics
Email - myeager@prolifics.com
LinkedIn - Matt Yeager

To learn more about Prolifics, visit www.prolifics.com.

Wednesday, August 21, 2013

Integrating WebSphere Portal 8 With IBM Connections Using Connections Portlets - Part 3

This article is the third and final part of a series that explains how to integrate IBM WebSphere Portal with IBM Connections using the Web App Integrator portlet and the Connections portlets. In the first two parts we covered integrating Portal with Connections using the Web App Integrator portlet. This part explains how to integrate WebSphere Portal with Connections using the IBM Connections portlets.

IBM CONNECTIONS PORTLETS
The IBM Connections Portlets for WebSphere Portal deliver the rich set of IBM Connections social software services for use within a WebSphere Portal environment. WebSphere Portal users can integrate the Activities, Blogs, Bookmarks, Communities, Forums, Profiles, and Wiki applications of Connections, along with a Tag Cloud portlet for quick filtering, into composite applications. The approach here is to display Connections content in portlets on a portal page.

These portlets are part of the collection of portlets available in the Solutions Catalog on IBM Greenhouse and can be downloaded from the following location. Here I used Portal version 8 and IBM Connections 3.0.1.

https://greenhouse.lotus.com

After unzipping the downloaded IC3011_Portlets_20121211.zip, you will see two folders, IC3011_Portlets and IC3011_Portlets_refresh. IC3011_Portlets_refresh is used to integrate with Portal 8. It contains a PAA file, SNPortlets.paa, with all the Connections portlets bundled into it. Once we install this PAA file we can access Connections in the portal environment.

CONFIGURING AND INSTALLING CONNECTIONS PORTLETS
To install, deploy and configure the Connections portlets, follow these steps.


1.  Import a certificate to support SSL
Log into the WebSphere® Application Server Integrated Solutions Console.

Navigate to Security -> SSL certificate and key management -> Key stores and certificates.
Add the certificates to the appropriate trust store as configured in SSL Configurations. To view the SSL configuration and determine the appropriate trust store, navigate to: Security -> SSL certificate and key management -> SSL configurations -> NodeDefaultSSLSettings -> ['Trust Store Name']
For example, in a standalone deployment you navigate to NodeDefaultTrustStore -> Signer certificates to add certificates. If NodeDefaultSSLSettings points to 'CellDefaultTrustStore', you add the certificate to 'CellDefaultTrustStore'.
Click Retrieve from port. Enter the host and SSL port used by your Connections server.
Click Retrieve signer information and Save.

2.  Run the Following Config Tasks
ConfigEngine install-paa -DPAALocation=C:\WebSphere\wp_profile\paa\SNPortlets.paa -DWasPassword=[was-admin-pwd] -DPortalAdminPwd=[portal-admin-pwd]

ConfigEngine deploy-paa -DappName=SNPortlets -DWasPassword=[was-admin-pwd]  -DPortalAdminPwd=[portal-admin-pwd]

ConfigEngine configure-SNPortlets -DWasPassword=password -DPortalAdminPwd=password

You can configure more parameters by logging into the admin console and adding values to the WPConnectionsIntegrationService provider as described in step 4.

3.  Changes in Themes
Add the following two modules to your theme profile by editing the corresponding JSON file. These JSON files are in the profiles folder of your theme. If you are using the default theme profile, edit profiles\profile_deferred.json. If you are using a custom profile, add the modules to that profile.

wp_liveobject_framework, dijit_form_17
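As a minimal sketch (assuming the standard Portal 8 profile layout, with the existing module IDs abbreviated to a single placeholder entry), the edited profiles\profile_deferred.json would carry the two extra entries in its moduleIDs array:

{
  "moduleIDs": [
    "wp_theme_portal_80",
    "wp_liveobject_framework",
    "dijit_form_17"
  ]
}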

4.  Configuring the WAS Admin Console
Log into the WebSphere Application Server Integrated Solutions Console. Navigate to Resources -> Resource Environment -> Resource Environment Providers.
Set the following five custom properties on WPConnectionsIntegrationService (create them if they are not already there):

blogsHomepageHandle, conversion, tagSearchType, emailSetting and globalBaseURL (the base URL of the Connections server).
Configure Authentication

For this article, I used basic authentication instead of SSO because it's simple and the setup is common to all environments. SSO setup varies in each environment depending on your access gateway.

Set the authenticationMethod property to basicAuth in lcaccelerator.properties, located at <wp_profile_root>\installedApps\<cell name>\PA_WPF.ear\snor.pf.portlets.war\WEB-INF\lcaccelerator\properties\lcaccelerator.properties.
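For reference, the relevant entry in lcaccelerator.properties looks like this (all other entries in the file are omitted here):

# lcaccelerator.properties (other entries omitted)
authenticationMethod=basicAuth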

When using basic authentication for the portlets, every user must type in their personal credentials manually in the personalize mode of the portlets, or shared credentials can be supplied from the Credential Vault.

If you are planning to use an SSO environment, please refer to the IBM Connections wiki at the following location. It has details for various types of SSO environments.

www-10.lotus.com/ldd/lcwiki.nsf

5.  Stop and Restart Portal Server

TESTING IN PORTAL
Once all the configuration is done, you can log in to the portal. You will see the IBM Connections tab with pages for Profiles, Blogs, Forums, etc., as in the image below. In edit mode, select the Personalize option, enter the username and password in the portlet as below, and save it.

You need to enter the username and password in only one portlet, and it is propagated to all the other portlets. Log out of the portal and log in once again, and you will see all the Connections portlets displayed with the corresponding Connections content. The Profiles portlet is shown in the image below.


This completes the integration of WebSphere Portal and Connections. To learn more about Prolifics, visit www.prolifics.com.

Sanju Varghese, a Senior Consultant for Prolifics, is an experienced Portal and JEE Architect. He was with IBM Global Services for six years delivering WebSphere Portal based solutions to its major customers and is certified in Java, IBM WebSphere Portal and BPM. He has performed different IT roles ranging from Architect, Consultant and Technical Lead to Software Developer for several large projects. Currently his main focus area is integrating Portal with collaboration software. Besides specializing in IBM technologies, he likes reading, traveling, and watching and playing cricket. He holds a Bachelor's Degree in Computer Engineering from Pune University, India.

Monday, August 12, 2013

Integrating WebSphere Portal 8 With IBM Connections Using Web App Integrator - Part 2

This article is the second part of a series that explains how to integrate IBM WebSphere Portal with IBM Connections using the Web App Integrator portlet and the Connections portlets. In the first part we introduced the Web App Integrator portlet, installed it on the portal server and tested it successfully. This part explains the steps needed to connect the installed Web App Integrator portlet with IBM Connections.

CONFIGURING PORTAL FOR INTEGRATION
On the portal side we need to perform two steps, as we did in the first part when testing the WAI portlet.

  • Create a URL page in the portal using the New URL button with a unique name, and set the URL value in the page properties (Advanced section -> HTML) to the URL of the Connections server.
  • Enter the unique name in the Web App Integrator portlet to generate the script.


 
CONFIGURING CONNECTIONS
We need to add the generated script to header.jsp on the Connections server. If you search for header.jsp in Connections you will see multiple header.jsp files in the source directories, and all are identical.

We need to create a header.jsp in the customization directory, which is specified during installation. On Microsoft Windows, if you accepted the defaults during installation, the customization path is C:\Program Files\IBM\LotusConnections\data\shared\customization. It can also be found in the WebSphere admin console of the Connections server, where the customization directory is stored as the value of the WebSphere environment variable CONNECTIONS_CUSTOMIZATION_PATH.
  1. Create a directory called "templates" at <customizationDir>\common\nav\.
  2. Copy the header.jsp file from any of the application source directories to the new templates directory. For example, copy the header file from <WAS_home>\Activities.ear\oawebui.war\nav\templates.
  3. Copy the script generated from the WAI portlet into the header.jsp.
  4. Restart the connections server.
Another WebSphere environment variable, CONNECTIONS_CUSTOMIZATION_DEBUG, can be found in the admin console as well. Set its value to true and you will not need to restart the Connections server for JSP changes.

TESTING INTEGRATION 
If everything has been configured properly up to this point, you can go back to the portal. Navigate to the URL page you created (here, the Connections Integration page) and you should see Connections inside the portal.


We have completed the integration of WebSphere Portal and Connections. I used Portal 8 and IBM Connections 3.0.1 for this article. You should be able to use this approach with other versions as well, with minimal or no changes.

This article explains the basic integration techniques between Portal and Connections. For an excellent user experience and a seamless transition from application page to application page, you should work with a theme developer to place additional theme modules. This will smooth the transition between product pages and produce the desired user experience.

In the next part, Part 3, we will see how to integrate IBM WebSphere Portal and Connections using the Connections portlets.


Sanju Varghese, a Senior Consultant for Prolifics, is an experienced Portal and JEE Architect. He was with IBM Global Services for six years delivering WebSphere Portal based solutions to its major customers and is certified in Java, IBM WebSphere Portal and BPM. He has performed different IT roles ranging from Architect, Consultant and Technical Lead to Software Developer for several large projects. Currently his main focus area is integrating Portal with collaboration software. Besides specializing in IBM technologies, he likes reading, traveling, and watching and playing cricket. He holds a Bachelor's Degree in Computer Engineering from Pune University, India.

Friday, August 9, 2013

IBM Operational Decision Manager 8.5 Upgrade Series Part 1 – Decision Center

In this series of articles we’ll follow along with an upgrade from a relatively complex customized 7.1 installation of IBM WebSphere Operational Decision Manager to IBM Operational Decision Manager (ODM) 8.5.

ODM 7.5, 8 and 8.5 bring us a number of valuable features and capabilities:

Highlights:
  • Enhanced business user experience with new change management, governance capabilities & business console
    • Branching and merging of branches
    • Business-focused, simplified collaborative environment
    • Governance framework for managing releases and multiple referenced rule sets (services)
  • The ability to execute business rules from mobile applications
  • Events capabilities: the addition of Complex Event Processing (CEP) capabilities to ODM
  • Business ability to test rules with complex results using Excel scenarios
  • XXX Mainframe
  • Operational enhancements
    • Ability to decouple the management console from the execution environment and manage embedded rules remotely.
    • Enhanced automatic generation of decision point web services (HTDS)

For reference on features:
What’s new in version IBM ODM 8
What’s new in version IBM ODM 8.5

The new capabilities above, together with the potential end of life for 7.1, are the usual drivers for upgrading to a later version.

In this article we’ll look specifically at migrating Decision Center, formerly known as Rule Team Server. In later articles in this series we’ll look at rules migration, testing, migration of customized Decision Validation Services, Decision Server and more.

Migration of Decision Center Data (fka RTS)
Decision Center stores its information in a database. From version to version there are changes to the database structure, and IBM provides scripts to assist with migrating information to the latest versions. The first issue we ran into was that the Ant migration documentation provided was confusing. Initially we thought that the script needed a running 7.1 Decision Center server and had to be pointed to an 8.5 database. We edited the properties file in the bin folder called teamserver-anttasks.properties and added the OldDatabaseSchemaName (the 7.1 schema), an output file location where the SQL should be generated, the datasource name for 7.1, and the server URL and login credentials. The migration script gives a number of options to migrate various aspects; for the schema we’ll pass the gen-migration71-script option to the ant command.
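As an illustrative sketch only (the exact property names vary and should be checked against the teamserver-anttasks.properties file shipped with your 8.5 installation), the edited properties and the generation command looked roughly like this:

# teamserver-anttasks.properties (illustrative values)
# URL and credentials of the Decision Center server the ant task connects to
teamserver.url=http://localhost:9080/teamserver
teamserver.login=rtsAdmin
teamserver.password=********
# datasource name used for the 7.1 repository
teamserver.datasourceName=jdbc/ilogDataSource
# the 7.1 schema and the file where the migration SQL should be generated
OldDatabaseSchemaName=RTS71
outputFile=C:/temp/migration71.sql

Then, from the teamserver bin folder:
ant gen-migration71-script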

DC Database migration Issue 1: Documentation
The following exception is thrown when we try to run the conversion:
[gen-migration71-script] ilog.rules.teamserver.model.IlrConnectException: Could not deserialize result from HTTP invoker remote service [http://localhost:8080/teamserver/remoting/session]; nested exception is java.io.InvalidClassException: ilog.rules.teamserver.model.IlrSessionContext; local class incompatible: stream classdesc serialVersionUID = 4729857957054555657, local class serialVersionUID = 8906827035412667122

Solution:
After some experimentation and discussion with IBM, we determined that the old schema has to be on the same database server that the 8.5 installation is using, and that the ant script should be pointed at the 8.5 Decision Center server and not the 7.1 RTS.

DC Database migration Issue 2: Inconsistent Data
There were two issues: missing data, and duplicate names that now need to be unique in the 8.5 version. We are not sure why data was missing, but the script failed on missing data in the artifact tables where information existed in the version tables for some rule versions that were part of a baseline. The report names for DVS simulations were the same and needed to be made unique because of a new table constraint.

Solution:
The solution was to delete the versions of the rules that were missing from the artifact table, because the baseline was in the recycle bin (here item.ID and item.NAME refer to the affected row in each case):
delete from baselinecontent where version=item.ID;
delete from version where id=item.ID;
and, for the duplicate report names:
update SCENARIOSUITEREPORT set NAME=item.NAME;

Issue 2: Secure Team Server (HTTPS) 
If you are attempting to connect to a secure, SSL-enabled server, you may get an SSLHandshakeException.

Solution:
To get around this exception you’ll need to download and then import the certificates. To download the certificates, browse to the Decision Center home page, view the certificates in the browser, and download a .cer file. The next step is to import the certificate into your local JRE trust store, e.g.:
keytool -import -keystore "C:\Program Files\Java\jre7\lib\security\cacerts" -file C:\Users\abcd\Desktop\RTS_Certificate.cer

Customized Branding 
The first thing ported to 8.5 was the branding customization. This is generally used in multi-tenant scenarios. A custom skin was added in skin-faces-config.xml under the WEB-INF folder, with the CSS and properties copied to the skin and customized. Additional tabs were also added as part of the customization, with a tabs.jsp file added to the custom folder.

Issue 1: CSS & Properties Merge
The custom Message.properties can’t be ported directly, as new keys were added in the 8.5 version; as with the CSS changes, a file diff and edit will need to be done.

Solution:
The best resolution was to compare the properties and CSS files from 7.1 with the customized versions, pull out the differences, and apply them directly to the new 8.5 skin files. The same file diff approach applies to the CSS files.

Issue 2: JSF version changes
They changed the underlying JSF implementation to MyFaces 1.1.5.

Solution:
Add the following lines of code to faces-config.xml in the WEB-INF folder of the teamserver WAR.
<renderer>
    <component-family>javax.faces.Command</component-family>
    <renderer-type>javax.faces.Button</renderer-type>
    <renderer-class>org.apache.myfaces.renderkit.html.jsf.ExtendedHtmlButtonRenderer</renderer-class>
</renderer>
<renderer>
    <component-family>javax.faces.Command</component-family>
    <renderer-type>javax.faces.Link</renderer-type>
    <renderer-class>org.apache.myfaces.renderkit.html.jsf.ExtendedHtmlLinkRenderer</renderer-class>
</renderer>

Extension Models
If you have any custom extension models, make sure that you upload the .brdx and .brdm files on the Decision Center Installation Wizard screen.



API Changes
We encountered a number of changes in the APIs. This is not an exhaustive list, just the ones we ran into as part of the upgrade/migration.

Migration of Decision Center from 7.1 versions to 8.5 requires updating the APIs as well as any customized skin/branding being used.

API and Method Changes
  1. API changed from ilog.rules.teamserver.web.servlets.IlrDownloadServlet to ilog.rules.teamserver.web.servlets.IlrTestingDownloadServlet.
  2. IlrApplicationException needs to be handled if you are calling IlrDefaultSessionController.onCommitElement.
  3. IlrApplicationException needs to be handled if you are calling IlrDefaultSessionController.elementCommitted.
  4. API changed from ilog.rules.teamserver.web.servlets.IlrDownloadUtil to ilog.rules.teamserver.web.servlets.IlrDownloalUtil.
  5. IlrWUtils.getTimeZone() now returns an object of com.ibm.icu.timezone.
The next parts of this article will cover decision server, decision validation services customizations and other migration items.

To learn more about Prolifics, visit www.prolifics.com.


Ryan Trollip is Prolifics’ Decision Management Practice Director. Ryan is an experienced solutions architect and implementation lead with a strong background in business-driven and improvement-focused solutions with an emphasis on Decision Management. Ryan has a proven track record of delivering successful projects, with over 15 years of experience covering project management, enterprise architecture, account management, and design/programming in a decision management context. Prior to joining Prolifics, Ryan was an Architect and Technical Account Manager, independently and for IBM/ILOG, focused on leading the delivery of complex decision management projects.


Amrinder Singh Brar is a Technical Lead for Decision Management at Prolifics. Amrinder has 8 years of experience in software development with 4+ years of extensive experience in ILOG/BRMS world. His key expertise lies in implementing Decision Management solutions in Banking, Telecom, HealthCare and Travel domains. Prior to joining Prolifics, Amrinder worked as a Solution Consultant for various ILOG migration and implementation projects with Telecom and IT majors.

Thursday, August 8, 2013

Strategy for Security: A Pure Bargaining Model

The Stalemate
In my opinion, strategy development can be thought of as a form of bargaining, where security and audit, each with a stake in the successful implementation of the strategy, arrive at the table with specific agendas, put forth and withdraw arguments driven by expectations of what the applications, infrastructure and support teams will accept or reject, and depart the table with an agreement that satisfies fewer goals than they had hoped to achieve.

Formulation of a roadmap for enterprise security is not concerned with the efficient application of forces like power and influence, as much as with the exploitation of potential synergies coming from the combined gain at stake for all involved. It is concerned with the possibility that particular architecture-driven operational outcomes are better (not worse) for all parties involved.

‘Pure’ Bargaining
Achieving consensus on a strategic roadmap for enterprise security can be modeled as a form of pure bargaining, a term used to describe bargaining in which each party is guided mainly by expectations of what the other will accept. With each party guided by expectations, and knowing very well that the others are guided by expectations too, these expectations begin to compound, leaving only one exit path: someone making a final and sufficient concession to resolve the deadlock.

This result is quite contrary to the fact that there is actually a range of possible architectures, any single one of which is preferable to all parties over no agreement at all. To insist on any one of the agreeable alternatives is a form of pure bargaining, since either party would rather take less than their dream solution than nothing at all: a stalemate only costs money and leaves the firm no better off than where it started. Either party would also take 'less' because it knows that 'receding' to reach agreement is an option at any point in the process, since there is no reprimand for agreeing after disagreeing!

The underlying tactical approach is especially suited to Security because the essence of pure bargaining tactics employed is the voluntary and irreversible sacrifice of a position of strength in order to reach a point of advantage, even though the advantage is somewhat diluted. It is the paradox that the power to limit the adversarial parties stems from an ability to confine oneself to a smaller range of choices- to give up some freedom of choice to gain leverage in a pure bargaining situation.

Quick Case Study: Authentication Strategy
Case in point is creating a strategy for achieving seamless authentication across the enterprise. The applications architect might not want a reverse proxy solution for an authentication gateway because he already owns a farm of proxy servers that service web requests for his applications. He prefers an approach that augments the existing technology instead of stacking another farm of reverse proxies in front! The security architect advocates the use of a virtualized object space that a reverse proxy enables you to create because it helps manage authorization in the long run. The audit manager cares more about the security perimeter than the specific technology stack within the perimeter. The infrastructure architect wants homogeneity in hardware across the technology stack to ensure his team has a manageable learning curve in order to support the solution. The helpdesk manager is worried about how users might be impacted no matter which alternative is picked as the authentication architecture.

In the scenario depicted above, the application architect is negotiating from a position of strength because he owns the applications, the infrastructure architect is also negotiating from a strong position because his stake is already in the ground- a certain type and model of hardware is powering the business applications! However, they don’t just get their way because the security architect has a point too. Creating a single reverse-proxy based gateway eliminates any instrumentation at the proxies the applications architect owns, and also provides a long run alternative to finer grained authorization should the business need it. The audit manager might appear to be neutral to the discussion, but knows that adding a reverse proxy widens the security perimeter and requires thorough security compliance certification of the reverse proxy servers. This is more work and more risk for an otherwise smoothly running operating firm!

Strategic Moves shrink the ZOPA
I would be remiss if I did not talk about how the perceived bargaining set for each of the participants changed at each bargaining step, and also how the parties who were in a position of strength changed their expectations by observing how well others accepted or rejected their ‘shifting’ bottom-line or ‘reservation price’ demands. The zone of possible agreement (ZOPA) initially is very large, as all parties in positions of strength seem to have inflated perceptions of their non-cooperative alternatives and won’t give in without a fight. Pure bargaining tells us that someone has to concede for the stalemate to be resolved in favor of achieving a ‘surplus’ outcome, one that results in all parties gaining something by participating in the process. At each bargaining step the zone of possible agreement shrinks as the weaker participant, the security architect, evaluates the expectations of the stronger parties, navigates the terrain, and uses his expertise to model the impact to the business, to users and to the long-run utility of choosing between the different alternatives, not only improving his own alternative but also worsening the other side’s alternatives at the same time.

The Concession
Experience reveals that the application and infrastructure architects have to let go, albeit selectively, of their biases towards pure proxy and homogenous hardware to accommodate the setting up of a reverse proxy as best response for a segment of applications duly benefiting from one, and an alternative solution like a plugin for proxy servers that is a best-response to another segment of applications. The audit manager has no choice but to add to his inventory of tasks the ‘seal-and-certify’ of all new components to avoid triggering an end-of-year audit. The helpdesk manager also will duly ask for process flow and user impact analysis from all parties concerned. Examples illustrating pure bargaining tactics abound in security strategy formulation.

To learn more about Prolifics, visit www.prolifics.com.

Javed Shah is a Practice Director for Security at Prolifics with more than 12 years experience in identity and access management architectures. He has broad exposure developing identity and access management solutions, and system software components that deliver reliable data security, web enablement and user lifecycle management services to customers. Before joining Prolifics, Javed founded and ran a professional services company in India for 6 years. Spanning over a decade, Javed has led identity management projects to successful exits at Nestle, University of California San Francisco, Kaiser Permanente, ABM Industries, BRE Properties, UPS, Tampa General Hospital and E*TRADE Bank. He was also the leader of the ITIM Level 3 defect resolution and analysis team in India where he was responsible for handling all customer defects for North America and Asia. Javed holds a Bachelor’s degree in Computer Science, a Certificate in Implementing and Managing an Enterprise Architecture using the Zachman Framework and the CISSP certification. He is also currently pursuing an MBA from the Haas School of Business, University of California Berkeley.

Tuesday, August 6, 2013

Integrating IBM WebSphere Portal 8 With IBM Connections Using Web App Integrator – Part 1

IBM Connections is a leading enterprise social software platform that provides social networking tools for businesses. Existing IBM WebSphere Portal users are exploring ways to seamlessly integrate Connections into Portal. This article explains how to integrate WebSphere Portal with IBM Connections using the Web App Integrator portlet and the Connections portlets.

In the first part of this blog series, I will give an introduction to the Web App Integrator portlet and the steps necessary to install it in WebSphere Portal Server. In the second part, I will provide details of the configuration of Web App Integrator and IBM Connections. The third and final part of this blog series will include the details needed to install and configure the Connections portlets.

WEB APP INTEGRATOR (WAI)
Web Application Integrator for IBM WebSphere Portal is a solution that allows external web applications to be integrated with WebSphere Portal. Generally there are two approaches used to access external web applications in the portal. The first is to display the external application in a portlet on a portal page by developing a custom portlet. The second is to include the entire external application inside a portal page. WAI uses the second approach, with more flexibility than existing web clipping methods: there are no viewing area constraints (no scroll bars), and all JavaScript and links within the integrated web app continue to function as expected. The user experience suggests that the user is still within the portal environment even though they are, in reality, natively accessing Connections.

INSTALLING WEB APP INTEGRATOR IN PORTAL
The Web Application Integrator portlet is available as a catalog download from the IBM Lotus Greenhouse. WAI can be downloaded from the following location; users need to register with the site before downloading.

https://greenhouse.lotus.com

After unzipping the downloaded file, webappintegrator.zip, you will see folders for the various portal versions; it contains WAI portlets for Portal versions 6 through the current 8.0.0.1. In earlier versions WAI was packaged as a web archive (WAR) file, but for recent versions it’s packaged as a portal application archive (PAA) file. Installing PAA files normally requires the portal Solution Installer (SI), but Portal 8 has a built-in solution installer, so this is not needed for version 8.

To install, deploy and configure WAI, follow the steps below:
1. Determine the correct version of the WebAppIntegrator portlet that should be used (8 or 8.0.0.1) and extract the files in that folder.
2. Copy WAIPortlet.paa to a temporary directory on your portal server.
3. Make sure WebSphere Portal Server is running.
4. Open a command prompt window and cd to <wp_profile>\ConfigEngine

Execute the following config tasks for Windows:
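(The commands below are a sketch only, mirroring the SNPortlets PAA tasks shown in Part 3 of this series; the appName value for the WAI PAA is an assumption and should be confirmed against the readme included in the download.)

ConfigEngine.bat install-paa -DPAALocation=[temp dir name]\WAIPortlet.paa -DWasPassword=[was pwd] -DPortalAdminPwd=[wps pwd]

ConfigEngine.bat deploy-paa -DappName=WAIPortlet -DWasPassword=[was pwd] -DPortalAdminPwd=[wps pwd]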


Here [temp dir name] is the location of the PAA file (<wp_profile>\paa), and [was pwd] and [wps pwd] are the WAS admin password and the portal admin password respectively. On UNIX, use ./ConfigEngine.sh instead of ConfigEngine.bat. Restart the portal server.

TESTING INSTALLATION IN PORTAL
Before integrating with Connections, we can do a quick test to verify that WAI is installed as expected. Log in to the portal as administrator and navigate to Administration -> Manage Pages; you will see the WAI portlet as below, with a button to generate the HTML script tag.



To test, follow these steps:
1. Create a test.html with the following contents (a minimal example is shown after these steps) and place it in wps.war (located at <wp_profile>\installedApps\<cell name>\wps.ear\wps.war).
2. Create a URL page in the portal using the New URL button and set the URL value in the page property, in advanced section->HTML, set value as http://<servername>:<port no>/wps/test.html.
3. Create a unique name, as shown in the WAI test Page above.
4. Generate script through WAI portlet using the unique name.
5. Place the generated script in the test.html immediately after the beginning <body> tag.
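A minimal test.html might look like the sketch below (illustrative only; any simple page works, as long as the generated script tag sits immediately after the opening <body> tag):

<html>
  <head><title>WAI test page</title></head>
  <body>
    <!-- paste the script tag generated by the WAI portlet here, immediately after <body> -->
    <h1>WAI integration test</h1>
    <p>If this content appears inside the portal banner and navigation, the Web App Integrator is working.</p>
  </body>
</html>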

Now you can test by navigating to the WAI test Page. You will see the page appear as below.


This concludes the first part of this blog series. In Part 2, we will see how to integrate the installed WAI with IBM Connections.

To learn more about Prolifics, visit www.prolifics.com.

Sanju Varghese, a Senior Consultant for Prolifics, is an experienced Portal and JEE Architect. He was with IBM Global Services for six years delivering WebSphere Portal based solutions to its major customers and is certified in Java, IBM WebSphere Portal and BPM. He has performed different IT roles ranging from Architect, Consultant and Technical Lead to Software Developer for several large projects. Currently his main focus area is integrating Portal with collaboration software. Besides specializing in IBM technologies, he likes reading, traveling, and watching and playing cricket. He holds a Bachelor's Degree in Computer Engineering from Pune University, India.

Is Your Integration Project Costing More Than It Should?

Over the years I’ve been directly involved in a great number of IT software development projects at Prolifics. And without exception these have all involved integrating a number of software components – whether integrating back-end systems such as SAP, plugging in external services from business partners, or the relatively straightforward integration of new business systems or processes with in-house business services.

And something I’ve seen all of these projects have to deal with is the fact that software integration introduces some very real dependencies in development and test plans and environments – all of which require a fair amount of attention and a non-trivial amount of coordination and resources, which can impact time and budget estimates.

But what if there was a way to reduce the cost and planning impact of these dependencies?

Monica Luke and I tackle this topic in a new IBM whitepaper that discusses the usage of business service virtualization and testing techniques and tools to remove bottlenecks and reduce costs in building today’s modern, interconnected applications.

Many of our IBM customers will be interested in the discussion on example service architecture as well as mobile and portal domains. We briefly contextualize the value for customers using products such as WebSphere Application Server, WebSphere Enterprise Service Bus, IBM Integration Bus (previously WebSphere Message Broker) and MQ, WebSphere DataPower, IBM Business Process Manager, WebSphere Portal Server and IBM Worklight.


Read our whitepaper here: Use Service Virtualization to Remove Testing Bottlenecks

How do I get started?
This is a great time to highlight Prolifics' very own IV&V (Independent Verification and Validation) practice. IV&V is an important aspect of our global delivery model, providing customers with end-to-end testing solutions that help to improve productivity and quality while reducing overall cost of all software development activities.

Our team has extensive experience in testing large, interconnected, critical business applications, and is ready to help you reinvent the way you approach your testing – backed by a service framework and testing accelerators that have been field tested and refined for over a decade.

Interested in taking a deeper dive? Connect with me!
Twitter - @greg_hodgkinson
Email - ghodgkinson@prolifics.com
LinkedIn - Greg Hodgkinson

To learn more about Prolifics, visit www.prolifics.com.


Gregory Hodgkinson is the Lifecycle Tools and Methodology Practice Director at Prolifics and an IBM Champion for Rational. Prior to that he was a Founder, Director, and the SOA Lead at 7irene, a visionary software solutions company in the United Kingdom. He has 16 years of experience in software architecture, initially specializing in the field of component-based development (CBD), then moving seamlessly into service-oriented architecture (SOA). His extended area of expertise is the Software Development Lifecycle (SDLC), and he assists Prolifics and IBM customers in adopting agile development processes and SOA methods. He is still very much a practitioner, and has been responsible for service architectures for a number of FTSE 100 companies. He presents on agile SOA process and methods at both IBM (Rational and WebSphere) and other events, has co-authored a Redbook on SOA solutions, and contributes to developerWorks.

Tuesday, July 23, 2013

Prolifics and IBM's Digital Experience Software: Enhance, Extend and Enrich...

by Niral Jhaveri, Vice President, User Experience, Prolifics

Hopefully you got a chance to attend IBM’s recent Digital Experience launch. I know we’re excited about this evolution in IBM technology, which will provide our customers even better ways to personalize content for their site visitors. It also excels at analyzing and optimizing site and campaign performance to drive desired results. For example, with IBM’s Digital Experience software, we can further help our portal and collaboration customers develop and manage dynamic content and rich media while delivering multi-channel applications - providing the ultimate, optimized customer experience. Prolifics is dedicated to providing customers with solutions that promote collaboration, empower conversation and bring together communities. Leveraging IBM’s Digital Experience technology stack, Prolifics can assist companies with:


Enhancing
Upgrading to IBM Digital Experience
Migrating systems from older versions to IBM Digital Experience

Extending
Extending applications to mobile and tablet channels
Extending business applications beyond the conventional desktop
Creating a mobile strategy

Enriching
Conducting health checks on aging applications
Providing tuning to improve performance and extend application longevity

Check out our latest video to see more on how Prolifics empowers businesses with mobile, social and collaboration.




Friday, July 19, 2013

Avoiding Future Interface Changes in IBM ODM Rule Services

When building and deploying an IBM ODM[i] business rule based service, one aspect to consider carefully is the structure and composition of the rule service interface: what will become the WSDL in a SOAP/WS HTDS[ii] implementation. Because all service consumers are dependent on this structure, frequent changes to this interface can result in significant rework for all consumers as well as code changes in the rule service itself. Additionally, supporting each interface change introduces a change dependency between all consumers and the service. There are ways to avoid this complexity by introducing more flexibility in the service interface. However, each technique comes with its own set of tradeoffs that must be carefully considered.

There are three basic approaches. To illustrate, imagine a rule service that determines if a company will accept a transaction from a US customer. In the initial and most simple version, the company policy is to refuse a transaction from any customer under the age of 21. Age is the only decision criteria. In a future version of the same service, the decision criteria becomes more complex when it is determined that the threshold age should vary based on the customer’s US state of residence.


Small, fast, and simple

In this approach the rules service interface contains only the data required by the service to perform the implemented decision point. The structure of the data in the interface is flattened with respect to the decision point implementation and attribute names represent the business decision criteria. The objects in the interface can be used directly in the rule BOM and the default verbalizations will likely be sufficient. No additional mapping of data is required. Marshaling/de-marshaling and transport overhead is kept to a minimum allowing very rapid service response. In later versions of ODM[iii], a REST implementation becomes a viable option with a small number of specific input attributes related to the decision and low transportation overhead. This approach is fast to implement, execute, and easy to change in initial development, but is the least flexible to future change once deployed. Any future additional data requirement in the decision point will require an interface update to accommodate the change. This technique is best used to quickly develop a rule service with rapid changes to the decision criteria while still in development.

In our example to identify customers below the age of 21, the interface may initially consist of only a customer identifier and the customer’s birthday. However, our future implementation that observes the customer’s US state of residence will require a service interface change to add this attribute. Concurrent execution of old and new versions of the rule service requires strict versioning of the WSDL and therefore of the rule app / rule set.
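To make the tradeoff concrete, here is a purely illustrative sketch (not an actual service implementation) of the flattened request object for version 1 of the decision point; the commented-out field shows the interface break that the version 2 rule change forces:

// Flattened request for the age-check decision point (illustrative sketch only).
// In an HTDS deployment, this XOM class is what gets reflected into the generated WSDL.
public class TransactionRequest {

    private String customerId;
    private java.util.Date birthDate; // the only decision criterion in version 1

    // Version 2 needs the US state of residence, which means a new field here
    // and therefore a new WSDL version for every consumer:
    // private String stateOfResidence;

    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }

    public java.util.Date getBirthDate() { return birthDate; }
    public void setBirthDate(java.util.Date birthDate) { this.birthDate = birthDate; }
}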


Enterprise Data Model

In this approach, the rule service interface consists of the entire enterprise data set that may be used to perform the implemented decision point, even if most of the data is unobserved in the initial rule implementation. In other words, the caller passes everything it currently knows about the subject of the decision in the event future rules might need additional information. This approach provides a more comprehensive data interchange, but is limited by the quality of the data model. This approach is most commonly used with vertical industries that have a well-defined and widely accepted object model, such as those defined by standards bodies such as ACORD, MISMO, EDI Standard formats, HL7, et al. The objects in the interface typically contain several levels of hierarchical data and map poorly to the flatter, business-oriented BOM desirable in business rule implementations. This commonly necessitates, at a minimum, custom BOM verbalizations, and frequently mapping of the interface object to a flatter and less normalized form; all adding to the time and effort in rule implementation. However, the advantage is that future rules are less likely to require an interface change IF the enterprise data model is complete, stable, and universally accepted. Because the interface is typically large, in the case of most SOAP web services and message-based architectures, marshaling/de-marshaling, transportation overhead, and remapping become the overriding performance constraints rather than rule execution times.

Using our example rule service implementing a decision point to identify customers below the age of 21, the initial interface would consist of everything about a Customer tracked in the enterprise data model. This will likely include the customer’s birthday allowing age to be derived. When the service needs to expand in the future, allowing the age threshold to vary by customer location, no interface changes will be required if the enterprise model already includes the US state of the customer’s residence. A ‘rules-only’ change can be made to add BOM mappings for the newly required attributes and the rules written using these new attributes. This approach should be avoided if the enterprise data model changes frequently or is unlikely to include future required data elements. Frequent changes in the enterprise model result in changes to the rule service interface even when the model changes do not affect the current rule implementation in order to remain consistent with all rule service consumers. And using this technique requires all consumers and rule services to implement the same version of the enterprise model. If the data model is unstable, managing this dependency alone can negate any flexibility gained with the more comprehensive initial interface.


Generic Key/Value Pairs

In this approach, the rule service interface consists of a generic list of key/value pairs, commonly implemented as a Map or map-like structure. The ‘key’ is a string describing a business attribute name and the ‘value’ is the value of that attribute. This allows any data to be passed through the rule interface without structural changes in the rule service interface itself and represents the most flexible interface style. Using the previous example of a rule service determining transaction acceptability based on customer age, the interface could initially have a list of keys ‘customerID’ and ‘customerBirthDate’ with their corresponding values. If in a future version customer location becomes required, the caller would simply add a key ‘customerLocation’ to the list in the interface and provide a value for this information to the rule service in the granularity required. However, the rule service needs to be aware of all the possible key values. These are typically maintained manually as a static or dynamic enumeration in the rule source so business attributes may be properly mapped to BOM vocabulary. This has a distinct advantage over the enterprise data model scenario in that only the data required in any given version of the service is actually passed through the interface to the rules. This keeps marshaling and transportation overhead low, but still allows for new data to be added without changes to the structure of the rule service interface.
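The sketch below (illustrative only; the class and key names are hypothetical) shows the shape of such an interface and the key enumeration the rule service side would maintain:

import java.util.HashMap;
import java.util.Map;

// Generic key/value request (illustrative sketch only).
public class GenericDecisionRequest {

    // Keys the rule service understands, typically mirrored as a static or dynamic
    // enumeration in the rule project so they can be mapped to BOM vocabulary.
    public static final String CUSTOMER_ID = "customerID";
    public static final String CUSTOMER_BIRTH_DATE = "customerBirthDate";
    // Added in a later version without any structural change to the interface:
    public static final String CUSTOMER_LOCATION = "customerLocation";

    private final Map<String, String> attributes = new HashMap<String, String>();

    public void put(String key, String value) { attributes.put(key, value); }
    public String get(String key) { return attributes.get(key); }
    public boolean has(String key) { return attributes.containsKey(key); }
}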

However using this approach means it is more likely that the callers of the rule service will need to make corresponding coding changes to support future data requirements in the rule service. This will often invalidate the advantage that the rule service may be changed without java code alteration. An additional concern with this approach is the loose typing and late binding of the interface elements. While this provides flexibility in the data passed, it effectively places additional run-time validation requirements on the rule service to ensure that the interface data expected is actually provided and typed correctly. Finding the rule service implementation out of sync with the implementation and expectation of multiple consumers is a common problem with this approach.
--------------------------------------------------------------------------------
[i] Operational Decision Management, formerly known as IBM ILOG JRules or WODM, WebSphere Operational Decision Management
[ii] Hosted Transparent Decision Service, a feature of IBM ODM that exposes a ruleset as a SOAP Web service
[iii] REST API supported in ODM V8.5 and later
Written by Lawrence Terrill, Technical Lead, Prolifics Business Decision Management Practice

Wednesday, July 17, 2013

Watch This! Google Hangout: Using Application Performance Diagnostics Tools to Increase the Effectiveness of Shift-Left Testing

How many times have software developers needed the ability to debug their application on a live server? Most developers have great code-profiling tools for their local development environment, but are often challenged to get similar data from a fully deployed application environment. Unit tests may pass and give good results locally, yet the application still has issues once it is fully deployed.

We at Prolifics are using the newly released IBM Application Performance Diagnostics (APD) Lite tool to give our own developers and our customers' developers a quick, easy way to troubleshoot issues on a single application server running IBM WebSphere Application Server or WebSphere Portal. The tool deploys quickly and provides insight into the execution of requests running on the application server, either for real-time viewing or for recording and offline review. Our WebSphere and Portal developers, architects, and administrators are always interested in new and valuable troubleshooting tools.

One of the best features of APD Lite is that it is a free download from IBM, requiring only a valid ID on the IBM.com website. You can carry the tool on a USB memory stick and bring quick value to your project.

I recently participated in a Google Hangout with IBM experts where we discussed the benefits of IBM's application performance diagnostic tools and how they can help application teams reduce the time and effort associated with shift-left testing.

Click here to watch!

Abstract: 
The purpose of shift-left testing is to reduce costs by identifying and eliminating issues earlier in the application development life cycle. Compare that to the purpose of application performance diagnostics tools, which is to reduce the time and effort required to identify and resolve issues, and it's easy to see how these tools can add value as part of a shift-left testing practice. Join RedMonk analyst Donnie Berkholz as he leads a discussion with IBM and Prolifics experts on how IBM's application performance diagnostic tools can help application teams test their code early and often. We'll discuss the concept of shift-left testing and identify use cases in which the tools can reduce the time and effort necessary to implement shift-left testing effectively.

Roundtable panelists:
  • Dan Kern, Solution Architect, Prolifics
  • Donnie Berkholz, RedMonk Analyst
  • Dan Berg, IBM Chief Architect at DevOps
  • Lindsay Farmer, Release Manager, Application Performance Diagnostics
  • Joydeep Banerjee, Architect, Application Performance Diagnostics
  • Todd Kindsfather, Product Manager, Application Performance Diagnostics
Want to connect? Email me at dkern@prolifics.com.

To learn more about Prolifics, visit www.prolifics.com.
Dan Kern is a Solution Architect at Prolifics and an IBM Champion for Tivoli. Dan joined Prolifics as a Senior WebSphere Administrator and performance tuning expert. Over the past 7 years, he has held roles as a Technical Solution Director and as Practice Director for the Automation and Systems Management area, and he now helps customers realize the full potential of their software investments as a Solution Architect. Dan is highly regarded in the Tivoli SAPM product and business areas and continues to focus on top-quality solutions.

Wednesday, July 3, 2013

Using HTDS with Java XOM in IBM Operational Decision Manager 8

Until version 7.1, IBM ILOG JRules supported the Hosted Transparent Decision Service (HTDS) only when you had a dynamic Execution Object Model (XML XOM). However, in IBM Operational Decision Manager (ODM) 8, IBM enables you to use HTDS even when you have a static XOM (Java XOM).

This is good news for projects where the XOM is defined in Java and the rules need to be exposed as a web service: in ODM 8 we no longer have to write custom web service code to expose the rules when the XOM is in Java. This saves a significant amount of time; however, there are still some changes you need to make to a Java XOM for it to work seamlessly with HTDS in ODM 8.

When you deploy your rules and Java XOM to the Rule Execution Server (RES), the Java XOM is internally converted to an XML representation, which is then exposed through the WSDL so your rules can be called as a web service.

The caveat is that when this conversion from Java XOM to XML happens inside RES for HTDS, you need to tell the engine how the XML should be generated. For example, if you have a List or a Date in your XOM, you need to generate the corresponding XML elements accordingly and convert dates from java.util.Date to the XML date format.

As we know, there is no single correct mapping between Java object structures and XML documents, so we need to use a Java XML binding implementation to fine-tune the mapping.

To convert your Java XOM seamlessly for HTDS and have it work with the web service request/response, you will have to use JAXB annotations to tell the engine the specific format in which you want your XML to be generated.

Below is how you should annotate a List in your Java XOM.

Let's say you have a List field defined in your Java code as follows (YourObject here is a placeholder for whatever XOM class the list holds):

List<YourObject> yourObjectList = new ArrayList<YourObject>();

In the getter method, you need to define your annotations as shown below.
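The original code screenshot is not reproduced in this post; the following is a minimal sketch of what the annotated getter might look like, assuming standard JAXB annotations and the hypothetical element type YourObject:

import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlElementWrapper;

// YourObject is a placeholder for whatever XOM class the list actually holds.
class YourObject { }

public class YourXomClass {

    private List<YourObject> yourObjectList = new ArrayList<YourObject>();

    // The wrapper name ("YourObjects") becomes the enclosing element in the
    // XML request/response generated for the HTDS WSDL.
    @XmlElementWrapper(name = "YourObjects")
    @XmlElement(name = "YourObject")
    public List<YourObject> getYourObjectList() {
        return yourObjectList;
    }

    public void setYourObjectList(List<YourObject> yourObjectList) {
        this.yourObjectList = yourObjectList;
    }
}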



Note: You can use any string for the name (name = "YourObjects") in the annotation, but whatever you define here becomes part of the generated XML element in your request/response, so choose the name to suit your requirements. Some people like a name ending in "List", while others prefer a plural, such as "YourObjects".

When you are dealing with a Date in the Java XOM for HTDS, you need to extend the XmlAdapter class and write your own adapter class, as shown below:
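The adapter code itself is not shown in this post; a minimal sketch, assuming a simple yyyy-MM-dd date format, might look like the following:

import java.text.SimpleDateFormat;
import java.util.Date;
import javax.xml.bind.annotation.adapters.XmlAdapter;

// Converts between java.util.Date in the Java XOM and the date text used in
// the HTDS request/response XML. The format chosen here is illustrative.
public class DateAdapter extends XmlAdapter<String, Date> {

    private static final String PATTERN = "yyyy-MM-dd";

    @Override
    public Date unmarshal(String value) throws Exception {
        return new SimpleDateFormat(PATTERN).parse(value);
    }

    @Override
    public String marshal(Date value) throws Exception {
        return value == null ? null : new SimpleDateFormat(PATTERN).format(value);
    }
}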

Then, on the getter methods for your dates, apply the adapter annotation as shown below:
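Again a sketch, assuming the DateAdapter class above and a hypothetical effectiveDate property:

import java.util.Date;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

public class YourDateHolder {

    private Date effectiveDate;

    // Tells JAXB to run this date through DateAdapter when building the HTDS XML.
    @XmlJavaTypeAdapter(DateAdapter.class)
    public Date getEffectiveDate() {
        return effectiveDate;
    }

    public void setEffectiveDate(Date effectiveDate) {
        this.effectiveDate = effectiveDate;
    }
}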


This ensures that the conversion from Java XOM to XML format happens seamlessly.


Alok Keshri is a Technology Manager at Prolifics focusing mainly on the ILOG JRules and ODM space. He has over twelve years of experience in the IT industry spanning multiple sectors, and is an experienced and resourceful enterprise software tech lead/architect who has worked with J2EE, SOA, BRMS, and BPM technologies. He specializes in object-oriented analysis, design, development, and project estimation with heterogeneous web technologies in complex business systems across a variety of platforms, and has extensive experience with IBM middleware products such as WebSphere Application Server, WebSphere ILOG JRules, WebSphere Message Broker, and other products in the IBM BPM stack. He is certified in WebSphere ILOG JRules and FileNet. Alok received his BE in Electrical Engineering in 2000.

Wednesday, June 19, 2013

Project Showcase - Prolifics Delivers IBM WebSphere DataPower Solution to Office Supplies Company

At IBM Impact, Prolifics co-hosted a session with our customer that showcased a recent IBM WebSphere DataPower solution. Learn more about how our experts were able to help our customer improve security and achieve full Payment Card Industry (PCI) compliance.

Prolifics' customer has been serving business customers for over 20 years, offering everything from office supplies to technology products to furniture and more. Dedicated to improving customer experiences, the Company embarked on a growth strategy in 2012 aimed at uniting its web and brick-and-mortar stores through a corporate revamp. To keep personal information secure, company leaders also looked to improve security and policy enforcement for their website as a critical component of supporting the new strategy. Further, the Company needed to comply with the Payment Card Industry (PCI) Data Security Standard (DSS). With this in mind, the Company began to explore WebSphere DataPower to address its security concerns. Business and IT leaders were confident that DataPower would enable better, faster security and a higher level of compliance for their website. Seeking the expertise of a trusted partner, the Company engaged Prolifics to implement the solution, which demonstrated DataPower's ability to help them meet compliance requirements far more quickly and efficiently than before. The appliance now provides front-line defense for inbound and outbound internet traffic by acting as a web-facing application firewall and isolating information for added security.

To learn more about Prolifics' DataPower solutions, email solutions@prolifics.com or visit www.prolifics.com.

Monday, May 20, 2013

Prolifics' AJ Aronoff on Healthcare Infrastructure Demands

AJ Aronoff is a Practice Director and IBM Champion covering Application Infrastructure at Prolifics. At IBM Impact, AJ was recently interviewed on current infrastructure demands in the healthcare industry. Specializing in solutions for healthcare, AJ explains how he helps the industry understand the latest healthcare regulations. In this video, he discusses the value that IBM WebSphere Application Server (WAS) brings to the industry and the latest advancements in WAS 8.5. In addition, AJ explains the impact mobile technology is having on the healthcare space, opening up the possibility of cutting down on paperwork and allowing doctors and medical staff to record information at the bedside.

View AJ's interview on Healthcare Infrastructure Demands!


Interested in learning more? Contact AJ Aronoff today - aj@prolifics.com.

To learn more about Prolifics, visit www.prolifics.com.


AJ Aronoff is the Application Infrastructure Practice Director for Prolifics and an IBM Champion for IBM WebSphere. AJ first joined Prolifics as a Developer, then specialized in WebSphere MQ. He has 25 years of experience in the IT field, 17 of those years at Prolifics. As a Prolifics consultant, he has done MQ design, implementation, infrastructure, monitoring, and security assignments at several large financial, insurance, retail, and communications firms (Bloomberg, Credit Suisse, Deutsche Bank, DTCC, Fidelity, ITG, JPMC, Och Zif, Tokyo Marine, Pep Boys and British Telecom). He has presented on security and infrastructure at Impact, Hursley comes to Minneapolis and Palisades, and MQ User Groups. His customers use Omegamon to monitor over a thousand systems across the globe.

Application Performance Testing Best Practices

Application Scalability:
  1. Application scalability is defined in two ways: (a) vertical and (b) horizontal
  2. Vertical scalability is defined as the scalability of an application as additional CPUs are added to the same server
  3. Horizontal scalability is defined as the scalability of an application as additional servers are added to the environment
  4. Scalability of an application is important for large enterprise Portal deployments where an individual server cannot support the anticipated load. In such cases, the Portal provides the capability to cluster multiple instances of the application. This can be achieved either by installing multiple instances of the Portal on a single server with a large number of CPUs, each instance running in a separate Java Virtual Machine (JVM), by adding separate servers, or by a combination of both approaches
  5. Performance improvements depend on how the operating system, application server, and JVM handle the scheduling of threads across a larger number of CPUs. This is also subject to change as subsequent versions of JVMs are released to market. In addition, careful tuning of the application server is required to ensure that the load on a single instance can be most effectively supported (e.g., the WebSphere Web Container Thread Pool parameters)
  6. For horizontal scalability, the only limitation on the scalability of the Portal will most likely be from the network or database perspective (the database is a shared resource between all instances of the Portal in a particular cluster). Any other system component that is shared by all Portal instances (e.g., LDAP Directory server) can potentially create a bottleneck and prevent further scaling of the application.
  7. For Portal instances running in a clustered environment, Portal network throughput should remain below 70% of the maximum network bandwidth available at peak periods (i.e., a maximum of roughly 8 MB/sec on a 100 Mb/sec network). Network throughput above this level will impact average response times for end users during these periods.
  8. Database CPU utilization above 75% at peak periods can greatly impact Portal performance, depending on the types of queries being executed at that time

Web Server:
  1. If a web server (e.g., Apache, Sun ONE, IIS) is used to proxy requests to the application server in a production environment, it is recommended to host the web server on a separate physical server. Such a configuration is typically encountered in an Internet deployment with a DMZ (demilitarized zone)
  2. The net effect is that more resources, in the form of memory and CPU, are available to the application server to process requests. This has a small impact on response times due to the additional network hop, but increases the overall throughput of the application thanks to the additional CPU resources available

Application Server:
Tuning parameters for WebSphere & Apache Tomcat application servers
WebSphere:
  1. The optimal number for the Web Container Thread Pool Maximum Size is 75 (default is 50). Both lower and higher values result in slightly lower throughput.
  2. Increasing the Thread Pool Minimum Size has an adverse effect on performance.
  3. Similarly, increasing Thread Inactivity Timeout has a slightly negative effect on performance.
  4. Running the Performance Monitoring Service introduces about 30% overhead
Apache Tomcat:
Development-time issues relate to how the Java code for the web application was designed and implemented. Again, there is a whole set of implementation best practices in this area, such as the following (a small servlet sketch follows the list):
  1. Do not create sessions for JSPs if they are not required
  2. Do not store large objects in your session.
  3. Time out sessions quickly, and invalidate your sessions when you are done with them.
  4. Use the right scope for objects.
  5. Use connection pooling for improving performance.
  6. Cache static data.
  7. Use transfer objects to minimize calls to remote services.
  8. Minimize logging from Web applications, or use simple logging formats
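As a small, hedged illustration of the session-related points above (the class and page names are invented for the example), here is a logout servlet that invalidates the session as soon as the user is done:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Illustrates "invalidate your sessions when you are done with them".
// Short session timeouts are normally configured in web.xml via
// <session-config><session-timeout>10</session-timeout></session-config>.
public class LogoutServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // getSession(false) avoids creating a new session just to tear it down.
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate(); // release server-side session memory immediately
        }
        response.sendRedirect("login.jsp"); // hypothetical landing page
    }
}

For JSPs that do not need a session, the page directive <%@ page session="false" %> prevents one from being created, covering the first point in the list.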
Database Server:
For large enterprise deployments of the Portal, it is recommended that a dedicated database server be used to host the Portal schema. In addition, certain database subsystems (e.g., MetaStore, Users) can be further distributed across several database servers. Deploying the Portal schema in this manner will help to ensure that the database will not become a bottleneck as the Portal is scaled to support a continually growing user base
Oracle 11g:
To assist in the rollout, build a list of tasks that increase the chance of optimal performance in production and enable rapid debugging of the application. Do the following:
  1. When you create the control file for the production database, allow for growth by setting MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, and MAXLOGHISTORY to values higher than what you anticipate for the rollout. This technique results in more disk space usage and larger control files, but saves time later should these need extension in an emergency
  2. Set block size to the value used to develop the application. Export the schema statistics from the development or test environment to the production database if the testing was done on representative data volumes and the current SQL execution plans are correct
  3. Set the minimal number of initialization parameters. Ideally, most other parameters should be left at default. If there is more tuning to perform, then this appears when the system is under load
  4. Be prepared to manage block contention by setting storage options of database objects. Tables and indexes that experience high INSERT/UPDATE/DELETE rates should be created with automatic segment space management. To avoid contention of rollback segments, use automatic undo management
  5. All SQL statements should be verified to be optimal and their resource usage understood
  6. Validate that middleware and programs that connect to the database are efficient in their connection management and do not logon or logoff repeatedly
  7. Validate that the SQL statements use cursors efficiently. The database should parse each SQL statement once and then execute it multiple times. The most common reason this does not happen is that bind variables are not used properly and WHERE clause predicates are sent as string literals (see the JDBC sketch after this list). If you use precompilers to develop the application, then make sure to reset the parameters MAXOPENCURSORS, HOLD_CURSOR, and RELEASE_CURSOR from their default values before precompiling the application
  8. Validate that all schema objects have been correctly migrated from the development environment to the production database. This includes tables, indexes, sequences, triggers, packages, procedures, functions, Java objects, synonyms, grants, and views. Ensure that any modifications made in testing are made to the production system
  9. As soon as the system is rolled out, establish a baseline set of statistics from the database and operating system. This first set of statistics validates or corrects any assumptions made in the design and rollout process 
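To illustrate the bind-variable point in item 7 above, here is a hedged JDBC sketch; the table and column names are invented for the example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BindVariableExample {

    // Bad: concatenating the literal forces the database to hard-parse a new statement per value.
    // String sql = "SELECT status FROM orders WHERE order_id = '" + orderId + "'";

    // Good: a bind variable lets the database parse the statement once and execute it many times.
    static String findStatus(Connection conn, String orderId) throws SQLException {
        String sql = "SELECT status FROM orders WHERE order_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("status") : null;
            }
        }
    }
}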

Monday, May 13, 2013

Showcasing Effecta at SAPPHIRE NOW - Prolifics Tool for Test Automation & Data Validation

Prolifics kicks off SAPPHIRE NOW & ASUG Annual Conference this week in Orlando. At the conference, we will be showcasing our SAP testing solutions and sharing recent success stories with customers like McKesson and Pacific Coast. At booth #2715, we will also showcase a demo of our Test Automation and Data Validation tool, Effecta.

Get a sneak peek of Effecta by viewing the demo below.

Learn more about our SAP testing solutions for our customers here:
McKesson Transforms Business with Script-less Test Automation
Pacific Coast Builds SAP Test Automation Strategy with Effecta

Effecta Demo


To learn more about Prolifics' week at SAPPHIRE NOW, visit: http://www.prolifics.com/sapphirenow-2013.htm

Thursday, May 9, 2013

BPM in the Music Industry - IBM Impact Presentation Replay

At IBM Impact, I had the opportunity to co-present with one of Prolifics' BPM clients, Broadcast Music, Inc. (BMI). The exciting thing about this presentation is that we introduced BPM in the context of the dynamic and rapidly changing music industry. With new digital music outlets such as Pandora, Hulu and others, music is no longer confined to vinyl albums, CDs, or traditional venues like live concerts, karaoke bars, or radio. Music is truly global and ubiquitous, which presents interesting new process challenges for those in the music business.

Take a closer look by viewing my presentation from Impact.


BMI: Innovating the Music Industry with BPM from Prolifics

Want to learn more? Connect with me!
Email - hwebb@prolifics.com
LinkedIn - Howard Webb

For more information about Prolifics, visit: www.prolifics.com


Howard Webb is a Director of Prolifics' BPM Advisory Services. Howard and his team provide consulting and guidance to clients transitioning to highly efficient process-managed business models and equip them for success in their BPM initiatives. For over 25 years he has been a consultant, trainer, facilitator, and speaker on the topics of Business Process Management (BPM), data architecture, and project management. He founded the Midwest BPM Users Group and has published articles on BPM and enterprise architecture. Prior to coming to Prolifics, Howard was a founder and partner of Bizappia, a consulting and services firm focused on business agility, performance, and innovation. Prior to Bizappia, he was a Sr. BPM Technical Specialist with IBM.