Wednesday, December 28, 2011
A Social Approach to Solving Business Problems
In today’s world of empowered individuals, leaders and businesses must harness potent new technologies and social media to organize themselves and to demonstrate authenticity, fairness, transparency and good faith.
Social engagement can be broadly categorized into two areas: socializing with your external subscribers and socializing with your own employees. To maintain a healthy relationship and get active feedback, a system that lets subscribers offer honest opinions, and lets you address their issues and concerns in a timely manner, is critical to the business. Facebook, G+, Twitter, and Foursquare are established open platforms that can engage these external subscribers well. Another way to engage and socialize with your subscribers is through a secured and controlled platform, your own corporate portal, which empowers subscribers with the information they need, when and how they need it.
Social engagement within an organization can be seen as socializing between employees, departments, channels and so on. The synergy that a true social enterprise generates can show dramatic ROI on your record books. IBM Connections is one such product: it helps organizations create an internal platform they can leverage to manage internal and external social networks. Connections' mobile capabilities ensure that employees are connected 24/7 across the globe, providing the best possible value to subscribers. In conventional systems, information flows via email, and appreciation, issues and suggestions are shared in a closed loop. With Connections, however, they are discussed openly, bringing much-needed empowerment and transparency to the work culture.
While discussing a business requirement with one of my customers, I realized how the social aspects of external subscribers and internal employees could be bridged using the IBM Connections platform. The customer wanted to revolutionize the information exchange between their subscribers and customer service representatives. As subscribers log in to the web portal, they receive a highly personalized set of alerts, links to blogs, and content relevant to their preferences. A subscriber can ask questions and share concerns or any other appropriate information, including rich media, with a customer service representative. As we worked with the customer through this requirement to identify relevant COTS products, IBM Connections came in as the closest fit. Given the social value it provides, the decision was an easy one for the customer to make.
A social approach to solving common business problems can help businesses optimize their processes, bringing in necessary transparency to their subscribers and employees. It also adds a different dimension to the traditional delivery approach by promoting an active engagement between various stakeholders.
To learn more about Prolifics' social business solutions, visit our website at www.prolifics.com.
Prabhakar Goel is a Solution Architect with Prolifics and a key member of a highly specialized team of IBM WebSphere experts involved in the architecture and delivery of business integration solutions, high-transaction commercial portals, and solutions based on open source technologies. He is an expert in end-to-end SOA implementations using the IBM suite of products, including Portal, Connections, Content Management and Process Server. He holds a Master's in Software Systems from BITS, Pilani and a graduate degree in Electrical Engineering from Kurukshetra.
Thursday, December 22, 2011
Why I think Lotusphere is a Great Event
It’s that time of the year again. As we enter the holiday season, I’m already looking forward to starting the New Year with an overview of the latest industry trends and learning about new technology opportunities.
I have always felt it would be nice if someone could give me a quick session to bring me up to speed on the technological changes of the past year: what others did with technology and how it helped them, plus some hands-on labs where I could touch and feel all these new technologies to increase my confidence when recommending a solution. Lotusphere is just that for me - a dream come true; exactly what an IT manager, decision maker, architect or developer would love to start the year with.
Lotusphere offers Jumpstart sessions for getting to know and understand new products and gaining a personal feel for them. You get to scout the new technologies on offer, their business value, their implications, and related opportunities.
In short, Lotusphere features:
- Sessions that provide comprehensive, step-by-step detail on specific technologies
- Sessions that present practical solutions others have used to solve complex problems with new technologies
- Sessions that cover a breadth of topics, from core development skills to those needed to build social applications
- Sessions on the latest features, product capabilities and deployment techniques
- Sessions on product architecture, especially helpful for planning future data center needs and spotting opportunities to get creative in that area
Finally, the number of business opportunities Lotusphere presents for everyone is unlike any other show. I have met many of my current customers at previous Lotusphere conferences and had the opportunity to interact with them and share how the latest technologies can be leveraged to solve even their toughest business challenges. I make sure to bring many of my customers to Lotusphere so that they can learn about new trends and technologies; it makes it much easier for me to discuss these technologies with them and apply them to their current business problems.
And for me the fun just doesn’t end there; it’s a welcome break from the snow and freezing temperatures of Colorado to spend a week in the warm setting of Orlando!
Prabhakar Goel is a Solution Architect with Prolifics and a key member of a highly specialized team of IBM WebSphere experts involved in the architecture and delivery of business integration solutions, high-transaction commercial portals, and solutions based on open source technologies. He is an expert in end-to-end SOA implementations using the IBM suite of products, including Portal, Connections, Content Management and Process Server. He holds a Master's in Software Systems from BITS, Pilani and a graduate degree in Electrical Engineering from Kurukshetra.
Thursday, December 15, 2011
What I Am Looking Forward to at Lotusphere 2012
The main theme of Lotusphere 2012 is ‘Social, Mobile and Cloud’ - I like to use the acronym MoSCo for it. 2011 marked a key shift from the PC to the mobile world: the year in which smartphones and tablets were set to out-ship PCs for the first time. Consumers no longer use the desktop as their sole channel to access the Web, and this opens a window of opportunity in the mobile and cloud space.
Lotusphere has over 300 sessions scheduled over a single week, and trying to come up with a schedule can be a truly overwhelming task. The complete list of sessions can be viewed at:
http://ibmtvdemo.edgesuite.net/software/lotus/lotusphere/lssessions.pdf
Each session is unique, and a lot of homework is needed to come up with a schedule that caters to individual needs. Since I'm specifically interested in developments in the mobile space, particularly enterprise adoption and multi-channel delivery, I'm looking forward to the following sessions:
- SPN204 Harnessing the Power of Enterprise Mobility
- BP210 From Desktop to Mobile and Smart Phones – Lessons Learned: Taking Web Applications Mobile
- AD301 Developing Exceptional Mobile and Multi-Channel Applications using IBM Web Experience Factory
- AD302 Deliver Rich Mobile Experiences with IBM WebSphere Portal Mobile Theme
- ID228 What's New in IBM Connections Mobile
- SPN213 The Future of Social Business: A Faculty Panel Discussion
- SPN207 Securing and Connecting the Cloud: Insider Tips on Successful Cloud Adoption Strategies
- INV302 Strategy in Action: IBM Mobile for Social Business
- BP105 tick.tick.tick.tick. #! It's time to Evangelize, Educate and Energize your users to Get Productive, Get Social and Do BUSINESS!
- GEEK101 SpeedGeeking!
Finally, I can’t wait for the Wednesday Night Party, a must for attendees, and the Blogger Open on Thursday afternoon. There is so much more to Lotusphere than all the wonderful sessions.
Laks Sundararajan is a Solution Architect with Prolifics and a key member of a highly specialized team working on IBM WebSphere Portal, Content Management and Collaboration technologies. He has led the implementation of many global projects using IBM WebSphere Portal and has an extensive background in the design and development of enterprise portals. He specializes in providing enterprise SOA solutions leveraging WebSphere Portal, Content Management, Tivoli and Mashups. He holds a Master's in Information Technology from Carnegie Mellon University and a graduate degree in Engineering from BITS, Pilani.
The 3 Levers of Business Agility in Action: Prolifics Weighs In!
In October, IBM hosted a Business Agility Executive Forum in New York City that was streamed live around the world. The event focused on how organizations can transform their businesses for continued growth in today’s marketplace. During this information-rich forum, those of us in the audience or streaming the video live heard how successful companies are improving agility through interconnected business processes supported by a flexible infrastructure.
The Business Agility launch was focused on three key levers – decision management, process management and technologies. Another key theme of the event was how organizations should “Think Big, Start Small, Scale Fast.” All of these themes resonated well with Prolifics and our customer, IDP, who was the keynote client speaker at the event in NYC. Prolifics also had the opportunity to participate, as our VP of BPM and Connectivity, Anant Gupta, was chosen to speak on the panel with IDP. We were pleased to be a part of such a highly informative and interactive panel discussion.
I’d like to discuss IDP’s solution, a great example of the three levers in action, in greater depth.
Achieve better decisions driven by analytics and business rules - IDP incorporates a sophisticated business rules engine (via ILOG JRules) that is insurance-carrier specific and involves algorithms that determine policy rates in real time. Not only does JRules provide the complex logic behind the ratings, but its ease of use allows business analysts, not programmers, to create the rules for each carrier. Another key feature is the governance around the rules. Each configuration can be versioned, and the auditability feature makes it easy to show compliance with any regulatory needs.
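To make this concrete, here is a minimal plain-Java sketch of the kind of carrier-specific rating logic such rules externalize. The class, field names and surcharge values are hypothetical illustrations, not taken from IDP's actual rule set; in JRules itself a business analyst would author this as rules or decision tables rather than Java code.

```java
// Illustrative only: the shape of a carrier-specific rating rule.
// All names and numbers below are hypothetical, not IDP's real rules.
public class AutoRatingRules {

    public static class PolicyRequest {
        public int driverAge;
        public int priorClaims;
        public double basePremium;
    }

    // In JRules this logic would live in the rule engine, editable by
    // business analysts; rendering it in Java just makes the shape clear.
    public static double ratePolicy(PolicyRequest req) {
        double premium = req.basePremium;
        if (req.driverAge < 25) {
            premium *= 1.30;   // hypothetical young-driver surcharge
        }
        if (req.priorClaims > 2) {
            premium *= 1.15;   // hypothetical claims-history surcharge
        }
        return premium;
    }

    public static void main(String[] args) {
        PolicyRequest req = new PolicyRequest();
        req.driverAge = 23;
        req.priorClaims = 0;
        req.basePremium = 1000.0;
        System.out.printf("Quoted premium: %.2f%n", ratePolicy(req));
    }
}
```

The payoff of externalizing this logic is that each carrier's surcharges can be versioned and changed without redeploying the portal.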
Take a smarter approach to process and integration - IDP integrates data through its usage of IBM WebSphere Enterprise Service Bus across the platform and can deliver data via Web 2.0 on desktops and mobile devices. IBM WESB mediates between the Agent Portal and the third-party Web services carriers use, such as credit reports, payment processing, and claim history repositories. The beauty of the enterprise service bus is that it enables each carrier to link to Web services without affecting the core SaaS solution. No matter how customized the user experience is, the SaaS solution doesn’t have to know anything about the back-end systems it is calling.
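The decoupling described here is worth a small illustration: the portal codes against one stable service contract while the mediation layer picks the concrete carrier endpoint. The sketch below is conceptual Java with hypothetical names; real WESB mediations are built as mediation flow components, not hand-written routing code.

```java
// Conceptual sketch of ESB-style decoupling: the caller sees only the
// contract, and a routing layer chooses the carrier-specific back end.
// Interface and class names are hypothetical.
import java.util.Map;

interface CreditReportService {
    Map<String, String> fetchReport(String applicantId);
}

class CarrierAEndpoint implements CreditReportService {
    public Map<String, String> fetchReport(String applicantId) {
        // would call carrier A's web service here
        return Map.of("score", "712", "source", "carrier-a");
    }
}

class CarrierBEndpoint implements CreditReportService {
    public Map<String, String> fetchReport(String applicantId) {
        // would call carrier B's web service here
        return Map.of("score", "698", "source", "carrier-b");
    }
}

public class MediationSketch {
    // The "mediation": route by carrier without the caller knowing the target.
    static CreditReportService route(String carrierId) {
        return "A".equals(carrierId) ? new CarrierAEndpoint()
                                     : new CarrierBEndpoint();
    }

    public static void main(String[] args) {
        // The portal only ever sees the CreditReportService contract.
        CreditReportService svc = route("A");
        System.out.println(svc.fetchReport("applicant-42"));
    }
}
```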
Accelerate application, service and information delivery with SOA and extend reach to cloud and mobile – IDP’s solution is a SaaS-based insurance agent portal. Since SaaS is administered by the provider on a one-to-many basis, it’s easier for the provider while also freeing carriers and agents from software administration. Additionally, SaaS is future-proof, scaling to allow for growth. IDP also offers 24/7 access from anywhere in the world thanks to its mobile component, which requires zero architectural changes to the platform. That means companies can immediately increase efficiency by giving insurers, agents and customers self-service access to the tools they need.
Another key theme at the launch, “Think Big, Start Small, Scale Fast,” is a philosophy that Prolifics has always adopted in helping customers move toward SOA and BPM. In previous years we won the Impact award for work we did at our customer Equinox Fitness. For that account we created a 3-year strategic plan and roadmap, yet we also delivered quick wins to help them realize value immediately while moving toward their long-term vision. This is definitely an approach I would recommend: combining both strategic and tactical activities.
A video replay of the Business Agility Executive Forum is available, and I would also like to share some photos from the event. You can read about Prolifics' solution at IDP in our case study. To learn more about how Prolifics is helping companies with their SOA and BPM initiatives, visit our website: www.prolifics.com.
Devi Gupta, Vice President, Marketing for Prolifics
Thursday, November 17, 2011
IBM Security Portfolio
Ever wanted a quick, no-nonsense explanation of what the IBM security products are and where they came from? Well, here it is:
- ISS – network and host security (Internet Security Systems/X-Force acquisition)
- TIM – identity management (Access360 acquisition)
- TAMesso – desktop single sign-on (Encentuate acquisition)
- TFIM – federation of access (homegrown)
- TDI – data transformation (Metamerge acquisition)
- TAMeb – web app access control (Dascom acquisition)
- TSIEM – security event management (Q1 Labs acquisition)
- TCIM – compliance dashboard (Consul Risk Management acquisition)
- DataPower – XML gateway
- i2 – crime prevention
- BigFix – patch management
- Guardium – database security
- OpenPages – governance, risk and compliance
- Algorithmics – financial risk management
Alex Ivkin is a senior IT Security Architect with a focus in Identity and Access Management at Prolifics. Mr. Ivkin has worked with executive stakeholders in large and small organizations to help drive security initiatives. He has helped companies succeed in attaining regulatory compliance, improving business operations and securing enterprise infrastructure. Mr. Ivkin has achieved the highest levels of certification with several major Identity Management vendors and holds the CISSP designation. He is also a speaker at various conferences and an active member of several user communities.
Monday, October 24, 2011
Self Service Applications and Case Management
In today’s electronic world, any organization that serves customers needs a means of communication that provides higher levels of service. Gone are the days when customers would make phone calls or visit a customer service center to get their work done. Today, customers use electronic channels like PCs, mobile devices, tablets and kiosks to perform their tasks. The need therefore arises to build self-service applications that are intuitive and deliver the time-sensitive information customers need.
IBM Case Manager is an enterprise case management system that provides a 360-degree view of any case being worked on. Information can flow to and from the case management system from multiple sources. Case management systems are predominantly viewed as internal applications used by knowledge workers and management within an organization; customers normally do not have access to them. To provide seamless integration between customers and the case management system, a self-service application is required. Self-service applications can be internet-enabled web applications that connect seamlessly to the case management system, and IBM Case Manager provides industry-standard interfaces to enable such applications to communicate with cases and their data.
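As a rough sketch of what that integration can look like from the self-service side, the following Java client fetches a case's status over HTTP. The endpoint path and response shape are hypothetical placeholders, not the actual IBM Case Manager API.

```java
// Minimal sketch of a self-service app querying case status over REST.
// The URL path and JSON shape are hypothetical, not IBM Case Manager's API.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CaseStatusClient {

    public static String fetchCaseStatus(String baseUrl, String caseId) throws Exception {
        URL url = new URL(baseUrl + "/cases/" + caseId);   // hypothetical path
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // Read the response body into a string.
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();   // e.g. {"caseId":"...","state":"Working"}
    }

    public static void main(String[] args) throws Exception {
        System.out.println(
            fetchCaseStatus("https://portal.example.com/api", "CASE-1001"));
    }
}
```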
Case Study:
Our customer, a large city organization, is in the process of modernizing its systems to provide better services to its members. As part of this Modernization effort, Prolifics is helping this customer in building an enterprise case management solution leveraging IBM Lotus Forms to capture the member inputs in electronic format and submit it to IBM Case Manager for further processing. Members can also view the status of the case using the Self Service Portal deployed in IBM WebSphere.
Benefits of Member Services Self Service Application:
- Members can log on to submit their requests over the web from a PC or other web-enabled devices like smartphones, tablets and touch pads
- Members get a 360-degree view of their service requests and can collaborate with their service provider on processing them
- The turnaround time for processing member service requests is reduced from weeks (using paper requests) to a few days (using electronic forms)
Technologies Used:
IBM Case Manager 5.0, IBM Lotus Forms, IBM ILOG JRules Engine, IBM WebSphere Application Server, IBM Cognos Now, IBM FileNet P8 5.0 Platform, DB2, Red Hat Linux
Kiru Veerappan is a senior ECM Consultant with 15 years of software development and management experience. He has been working on Enterprise Content Management and Business Process Management solutions for more than 10 years, creating unique solutions while mentoring team members in solving real business issues in a timely and cost-effective manner using the latest technologies. He believes in interacting with clients not just to deliver a piece of software but to act as an agent for change, delivering ideas while gathering requirements and providing real knowledge transfer. He specializes in content and workflow management solutions using the IBM FileNet suite of products.
Modernization Project Using IBM Case Manager
Business Application:
Service Purchase Plan Process
Business Challenge:
Prolifics is currently involved in a project at a large public retirement system. In an effort to modernize its infrastructure, the company wanted to ensure that the architecture is interoperable and robust by proving architecturally significant functionality rather than demonstrating complete business functionality. Equally important is the need for the system to demonstrate that it is agile enough to have its associated business processes and rules modified significantly faster than in its UPS (Unified Pension System) counterpart. In addition, the project must demonstrate that straight-through processing is achievable by allowing exception-free process instances to execute to completion without any manual intervention.
IBM Case Manager provides an installation framework that supports a distributed architecture, with IBM Case Manager installed on a separate system from FileNet P8. The IBM Case Manager installation program installs Case Manager Builder, Case Manager Client, the IBM Case Manager administration client, and the IBM Case Manager API.
The distributed system architecture is ideal for large production environments. The following graphic shows the typical architecture of IBM Case Manager in a distributed environment and the features that IBM Case Manager can integrate with.
Solution:
- Using an existing business process, Purchase Service Request, we designed and built the environment, which will be established on a VM configuration. This includes the installation of the requisite IBM Case Management, ILOG JRules, Lotus Forms, Datacap and Thunderhead software. Here is an example of the process flow and how the different technologies in Case Manager are leveraged:
- Case Builder is used for Business Process Modeling and configuring workflows
- ILOG Rules Studio authors and tests rules in JRules that are harvested from the existing UPS system
- Develop and integrate Lotus Forms user interfaces
- Configure a Datacap batch class and release scripts to load documents into FileNet
- Define and generate XML payloads containing the data necessary for the production of member correspondence
- Create test data, test cases and validation of the testing results
- Functional and integration testing of the ICM, ILOG JRules, Lotus Forms, Datacap and Thunderhead application components
The company supports Member Service Request transactions online and risk-based quality control, giving them the ability to shorten cycle times, improve service levels, and mitigate risks across their Service Processes. This enables increased throughput and capacity with existing resources and eliminates costs associated with document shipping, inbound document processing and operations processes.
How does IBM Case Manager help our customer?
- Provides knowledge workers with a contextual environment and 360-degree case view
- Helps knowledge workers create and participate in ad hoc and structured workflows
- Delivers real-time case metrics and integrated sentiment and content analyses to streamline workloads and remediate obstacles
- Offers a business–focused design that includes interview-style interfaces for case construction and the ability to capture industry best practices in templates
- Facilitates sophisticated decision management using an integrated business rules management approach, which uses automation and dynamic business rules to simplify assessment and payment processes and easily respond to ever-changing policies and legislation
- Simplifies collaboration and boosts productivity through social software and communication
Key benefits of the IBM Case Manager solution:
- Program efficacy: achieve better outcomes and results
- Employee and case worker effectiveness: handle more cases with fewer resources
- Optimal case outcomes: improve safety, cut costs and increase revenue
- Process efficiency: leverage automation wherever possible and focus on exceptions
- Compliance and visibility: manage risk and achieve compliance cost-efficiently
Khaled Moawad is a business consultant with 15+ years of experience in the field of IT. He has participated in large IT projects at multinational organizations in different fields. Khaled's business consulting experience is in IBM Enterprise Content Management, IBM FileNet, IBM Advanced Case Manager, and Lombardi Business Process Management. Khaled has excellent analytical and wide application-based process re-engineering skills, including project management expertise. During his career, he has gained a wealth of experience throughout all stages of pre-sales, implementation, support, software development and project management.
Thursday, September 29, 2011
Choosing a Messaging System: WebSphere MQ vs. WebSphere Application Server Service Integration Bus
A question that sometimes comes up in our architecture whiteboarding sessions is about the different messaging strategies available in WebSphere Application Server. IBM developerWorks has now published a great article detailing the differences between WebSphere MQ and the Service Integration Bus that comes with WebSphere Application Server. It's well worth a read.
Thursday, September 15, 2011
Cyber Security in High Demand
The old adage says: "keep your friends close, but your enemies closer". In this day and age, the IT department of your organization does not have to worry about the second part. The enemies are already at the gates. And keeping them out is an increasingly challenging task.
A recent study sponsored by Juniper Networks showed that not only has there been a dramatic rise in the number of security breaches in the past year, but the targets have also gotten bigger. The CIA, the FBI, the U.S. Senate, and various state police agencies have had their systems attacked. In the first half of 2011, security and data breaches cost U.S. enterprises almost $96 billion. At this rate, the cost for the whole of 2011 will be almost twice what it was in all of 2010. Consider that in 2010, 90% of businesses were compromised by at least one security breach, and more than 50% of the compromised businesses had at least two.
Another problem is that "the gates" the enemies are trying to get through are now everywhere. The entry points are in the software used by employees; in files, emails, web apps, web sites, databases, in everything on the information highway. The number of malware-related incidents went up from 4 million in the first quarter of 2010 to 6 million in the first quarter of 2011. Last year companies spent a record $63 billion on security, and that figure is expected to reach $75.6 billion in 2011.
As the study showed, the enemies get smarter and the attacks get more complicated every year. Throw up all your defenses - every firewall, host and network intrusion protection and detection system, anti-virus, anti-malware and application firewall - and it will still not be enough, because the enemies are a step ahead. The solution? "Know yourself and know your enemy" (Sun Tzu, "The Art of War"). Get the right security talent on board and use the right strategy.
The correct strategy, rooted in governance, risk management and compliance methodology, can go a long way. Consider governance, the system by which an organization controls and directs security development, as the backbone of the approach to managing security and how it relates to the business (http://www.cert.org/governance/ges.html). Then focus on compliance and regulations, the key to proactive defenses and enforced rules for a company's behavior as it pertains to security for the specific nature of the business. Governance is strategic, while compliance is tactical and specific: addressing compliance and security regulations allows the business to focus on the particular challenges and vulnerabilities specific to its business type and the vertical it operates in. Finally, adjust risk management, the set of technologies that address day-to-day security work, and include mature security components such as penetration testing, application security analysis, firewalls and intrusion prevention systems. The success of the security strategy depends on attention to all three components.
Talent is a different matter. With the increase in demand for security experts in response to the increased attacks, security talent is becoming more expensive and harder to find. So far, the number of college students who focus on cyber-security has not been keeping up with demand. There are even fewer opportunities to find experienced security consultants who are up to par with the criminal masterminds of the security underground. Security may be on the radar for around 1.9 million people, but there are only around 346,000 fully dedicated security professionals.
There are, however, security consulting firms, like the Prolifics Security Practice (http://www.prolifics.com/business-solutions-security.htm), that can help you with both the talent and the strategy. They bring the best and brightest security personnel on site to analyze, architect, develop and implement proper defenses and policies to address modern security threats, and they help set up the proper strategy so you protect the flanks, tie up the loose ends and govern smartly.
With the increasing number and caliber of security breaches, you cannot afford to sit around and wait. Find out what others are doing, go to conferences, ask consultants, bring in help - but do something, because the enemies are at the gates.
If you want to read more on the recent rise of the cyber attacks look here: http://articles.latimes.com/2011/jul/05/business/la-fi-hacking-security-20110705
Prolifics will be discussing cyber security in greater depth as a sponsor and speaker at the upcoming Cyber Security for Energy Delivery Conference on September 27-28. The event takes place in San Jose, CA and brings together major utility and asset owners and key government agencies from across North America. I will be co-speaking with IBM at this conference. With experience providing security solutions for the energy and utilities industry, we will be sharing our security solutions and recent case studies around ID and password management, single sign-on, directory services, web-based authorization, federation and other areas.
Alex Ivkin is a senior IT Security Architect with a focus in Identity and Access Management at Prolifics. Mr. Ivkin has worked with executive stakeholders in large and small organizations to help drive security initiatives. He has helped companies succeed in attaining regulatory compliance, improving business operations and securing enterprise infrastructure. Mr. Ivkin has achieved the highest levels of certification with several major Identity Management vendors and holds the CISSP designation. He is also a speaker at various conferences and an active member of several user communities.
Tuesday, August 16, 2011
Test Automation for SAP Packaged Applications
SAP Packaged Applications allow you to rapidly configure and customize business processes as your environment changes. To ensure the quality, performance and reliability of these applications, you need a sophisticated testing solution that can be configured and customized as quickly as your SAP landscape. In this article, we will show you how you can use your IBM® Rational® Functional Tester (RFT) toolset along with tools from IBM Ready-for-Rational partner, Arsin.
In this blog entry, I will discuss:
- A structured approach to SAP testing
- SAP's current test automation paradigm and its challenges
- The need for a new solution for SAP test automation
- How Arsin Packaged Test Automation for SAP, integrated with IBM Rational Functional Tester, helps address these challenges
A Structured Approach to SAP Testing
SAP implementations pose some of the most intriguing and difficult challenges in the QA universe. The system is tightly interconnected, extremely integrated, and typically linked to every business process in the enterprise. To tackle such an immense system, QA engineers must approach SAP applications with care.
With more than a decade of experience testing SAP systems for a large client base across myriad industry verticals, we have developed a test maturity model assessment and improvement framework to bring an organized, structured approach to SAP testing. This framework takes a three-pronged approach, offering process improvement, knowledge management, and test automation, as follows:
1. Process improvement. Process improvement deals with the assessment of the current Test Maturity Model and developing a plan to improve the Test Maturity Model to the next level and then implement it. A mature test process that has standardized templates, well-defined processes, clear protocols, and no bottlenecks provides for a complete and comprehensively tested SAP system. By comparing the current test maturity model with industry standards and identifying the gaps and focusing on them, test maturity can be improved.
2. Knowledge management. Knowledge management deals with institutionalizing QA knowledge collected over time. Traditional testing for SAP systems relies on the SAP system's functional and technical consultants for the subject matter expertise to deal with various situations. In this phase, test artifact libraries are built for critical business processes for regression. The following test artifacts are documented:
- Test Requirements
- Test Cases
- Test Procedures
The remainder of this discussion focuses on the test automation aspect of the Structured SAP Testing Approach. Our belief is that RFT, in conjunction with Arsin's Effecta Validation Engine, makes SAP testing thorough, comprehensive, easy, and cost effective.
Importance of Test Automation in SAP Implementations
The SAP landscape is continuously changing, as a result of changes to SAP modules from SAP, business process changes within the client’s company, changes to the system environment, changes to applications interfacing with SAP, and a multitude of regulatory compliance mandates.
In order to keep up with these changes, SAP systems must be thoroughly tested. With every change, there is a regression library of test cases that needs to be executed to ensure stability. Each test requires time and effort when executed manually; by comparison, automated tests take a very small fraction of the time and effort to execute. Automation also helps make most of the test assets reusable.
Current SAP Testing Solutions and their Limitations
The existing SAP testing model on the market today makes very rudimentary use of automation, in terms of:
Validation: In most cases, the available user interface (UI) tools are used to automate test execution, which is only about 25% of the total testing effort. Validation represents more than 75% of this effort, and scrubbing the data using UI test automation tools is difficult. A certain level of validation is possible through UI-based test automation tools; however, it takes a long time to script this validation, and any change requires a lot of re-coding after the first implementation.
Data management: Traditionally, data used for testing is captured and maintained in spreadsheets. Searching and sorting through this data is difficult, as is maintaining the consistency of data across users and locations. This difficulty is compounded by ever increasing volumes of test data to be maintained. In addition, there is no intelligent association between SAP metadata and its corresponding test data.
Managing change: Changes to SAP implementations occur during reconfiguration or the addition of custom-built components (programs). In these situations, the scripts for automated test execution need to be changed regularly, which is difficult. Moreover, when using UI tools for automation, 75% of the effort needs to be constantly re-worked to keep up with the changes to the SAP system.
Addressing these Limitations
The limitations described above call for a new solution. We offer a complete and scalable testing solution that combines the Arsin Effecta Validation Engine with IBM Rational Functional Tester.
By automating the validation of data, business processes, custom development and integrations across SAP applications, you can increase the quality of the implementation, support multiple changes in your environment and mitigate business risks. Also, by eliminating manual testing you can avoid greater difficulties in production that ultimately impact the quality and performance of the business. Arsin’s Effecta Test Suite provides the benefits of a complete testing solution by automating impact analysis of changes, test data maintenance, test execution and validation.
Figure 1: Arsin Effecta Solution Architecture for testing SAP applications
Test Data Manager
Stores test data, along with its criteria, in the Effecta database. Before an automated test executes, the validity of the test data is checked against the target system and the data set is automatically updated. If the test data no longer exists in the target system or cannot be reused, the data set update feature refreshes it with new valid data.
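The validate-then-refresh pattern this describes can be sketched as follows. This is an illustration of the pattern under assumed names, not Effecta's actual implementation.

```java
// Sketch of validate-then-refresh: before a run, confirm each stored test
// record still exists in the target system and swap in fresh data if not.
// Names and data are hypothetical; Effecta's internals may differ.
import java.util.List;
import java.util.Set;

public class TestDataRefresher {

    // Stand-in for a lookup against the target SAP system.
    static boolean existsInTarget(String record, Set<String> targetRecords) {
        return targetRecords.contains(record);
    }

    // Pick the first candidate the target system still knows about.
    static String refresh(String staleRecord, List<String> candidates,
                          Set<String> target) {
        return candidates.stream()
                .filter(target::contains)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException(
                        "no valid replacement for " + staleRecord));
    }

    public static void main(String[] args) {
        Set<String> target = Set.of("CUST-002", "CUST-003");
        List<String> candidates = List.of("CUST-002", "CUST-003");

        String testRecord = "CUST-001";   // stored in the test-data store
        if (!existsInTarget(testRecord, target)) {
            testRecord = refresh(testRecord, candidates, target);
        }
        System.out.println("Running test with: " + testRecord);
    }
}
```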
Script Manager
Automatically enhances recorded scripts and eliminates the need for customization. Script Manager enables script-less automation of IBM Rational Functional Tester.
Change Impact Manager
When changes occur in a system, Change Impact Manager automatically extracts affected objects and identifies test cases to be executed for regression testing. It also identifies objects being changed that don’t have test cases in the library.
Report Manager
Report Manager provides out-of-the-box reports for tracking test artifacts, development and test execution. Detailed test results pinpoint the failed events in a test case.
Test Manager
Effecta promotes reusability and repeatability with the following features:
- Ability to create Test Requirements and link them to Test Cases and development objects
- Ability to create Test Cases and link them to Test Requirements for coverage analysis
- Ability to create separate Test execution steps in the form of Test Procedures and link them to Test cases
- Defect management
- Dashboard for reporting and metrics
Validation Manager for Middleware
Simulates inbound messages at various data interchange points and validates outbound messages. Automatically validates translations and mappings.
Validation Manager for Transactional Systems
Validation Manager for Transactional Systems is a completely configurable, customizable and readily deployable validation library of components for various business processes. It significantly accelerates validation by automatically extracting the actual data created by transactions and comparing it with expected results. The Validation Manager is specifically designed to support SAP systems.
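A simplified Java illustration of this extract-and-compare style of validation appears below; the field names and hard-coded "actual" values are hypothetical stand-ins for data that would be queried from the system of record after a transaction commits.

```java
// Sketch of data-level validation: diff the records a transaction actually
// wrote against expected values, field by field, instead of scraping the UI.
// Field names and values are hypothetical.
import java.util.HashMap;
import java.util.Map;

public class TransactionValidator {

    static Map<String, String> diff(Map<String, String> expected,
                                    Map<String, String> actual) {
        Map<String, String> mismatches = new HashMap<>();
        for (Map.Entry<String, String> e : expected.entrySet()) {
            String got = actual.get(e.getKey());
            if (!e.getValue().equals(got)) {
                mismatches.put(e.getKey(),
                        "expected=" + e.getValue() + ", actual=" + got);
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        // "actual" would come from querying the back end; hard-coded here.
        Map<String, String> expected = Map.of(
                "orderStatus", "RELEASED",
                "netValue", "1500.00");
        Map<String, String> actual = Map.of(
                "orderStatus", "RELEASED",
                "netValue", "1500.00");

        Map<String, String> result = diff(expected, actual);
        System.out.println(result.isEmpty() ? "PASS" : "FAIL " + result);
    }
}
```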
Validation Manager for BI
Tests business intelligence systems during initial implementation and during maintenance and support pack deployments. It also automates the validation of data loaded from multiple ERP and other systems, and provides sophisticated reporting, including detailed results.
Conclusion
The benefits of using automation in SAP testing are abundant. Test automation, deployed with minimal effort, enables increased test coverage, which in turn reduces cycle time and enables efficient bug detection early in the development cycle. Since test automation is designed for reusability, routine tasks are eliminated and total cost of ownership is reduced. Test automation is far more precise and consistent, and features standardized reporting, enabling clear test analysis across the QA environment.
Sarat Addanki is the Vice President, ERP Practice. He has 18 years of experience in the ERP arena, including the design, development and testing of ERP implementations. He was part of a team of SAP quality professionals contributing to the design of SAP Test Accelerator TAO. He founded the ERP Division at Arsin, which focuses on developing frameworks and accelerators to ensure delivery excellence, reduce the overall cost of ownership and increase productivity in ERP implementations. The test accelerators he designed significantly improve the testing process, knowledge management and test automation. His division focuses on providing quality services for SAP, Oracle, PeopleSoft, Sterling, Retek and middleware applications. His domain expertise ranges from pharmaceutical distribution and hi-tech to manufacturing and retail. He is a PMI (Project Management Institute) certified Project Management Professional (PMP). Sarat holds a bachelor's degree in Computer Science and Engineering from Osmania University, Hyderabad, India.
In this blog entry, I will discuss:
- A structured approach to SAP testing
- SAP current test automation paradigm and its challenges
- The need for a new solution for SAP test automation
- How Arsin Packaged Test Automation for SAP integrated with IBM Rational Functional Tester helps address these challenges
A Structured Approach to SAP Testing
SAP implementations pose some of the most intriguing and difficult challenges in the QA universe. The system is densely interconnected, deeply integrated, and typically linked to every business process in the enterprise. To tackle a system of this scale, QA engineers must approach SAP applications with care.
With more than a decade of experience testing SAP systems for a large client base across myriad industry verticals, we have developed a test maturity model assessment and improvement framework that brings an organized, structured approach to SAP testing. This framework takes a three-pronged approach, offering process improvement, knowledge management, and test automation, as follows:
1. Process improvement. Process improvement begins with an assessment of the current test maturity level, followed by a plan to raise that maturity to the next level and the implementation of that plan. A mature test process that has standardized templates, well-defined processes, clear protocols, and no bottlenecks provides for a complete and comprehensively tested SAP system. Test maturity can be improved by comparing the current test maturity model with industry standards, identifying the gaps, and focusing on them.
2. Knowledge management. Knowledge management deals with institutionalizing QA knowledge collected over time. Traditional testing for SAP systems relies on the SAP system's functional and technical consultants for the subject matter expertise needed to deal with various instances. In this phase, test artifact libraries are built for critical business processes for regression testing. The following test artifacts are documented:
- Test Requirements
- Test Cases
- Test Procedures
The remainder of this discussion focuses on the test automation aspect of the Structured SAP Testing Approach. Our belief is that RFT, in conjunction with Arsin's Effecta Validation Engine, makes SAP testing thorough, comprehensive, easy, and cost effective.
Importance of Test Automation in SAP Implementations
The SAP landscape is continuously changing, as a result of changes to SAP modules from SAP, business process changes within the client’s company, changes to the system environment, changes to applications interfacing with SAP, and a multitude of regulatory compliance mandates.
In order to keep up with these changes, SAP systems must be thoroughly tested. With every change, there is a regression library of test cases that needs to be executed to ensure stability. Each test requires time and effort when executed manually; by comparison, automated tests take a very small fraction of the time and effort to execute. Automation also helps make most of the test assets reusable.
Current SAP Testing Solutions and their Limitations
The existing SAP testing model on the market today makes very rudimentary use of automation, with limitations in three areas:
Validation: In most cases, the available user interface (UI) tools are used to automate test execution, which is only about 25% of the total testing effort. Validation represents more than 75% of this effort, and scrubbing the data using UI test automation tools is difficult. A certain level of validation is possible through UI-based test automation tools; however, scripting this validation takes a long time, and any change requires substantial re-coding after the first implementation.
Data management: Traditionally, data used for testing is captured and maintained in spreadsheets. Searching and sorting through this data is difficult, as is maintaining the consistency of data across users and locations. This difficulty is compounded by ever increasing volumes of test data to be maintained. In addition, there is no intelligent association between SAP metadata and its corresponding test data.
Managing change: Changes to SAP implementations occur during reconfiguration or the addition of custom-built components (programs). In these situations, the scripts for automated test execution need to be changed regularly, which is difficult. Moreover, when using UI tools for automation, 75% of the effort needs to be constantly re-worked to keep up with the changes to the SAP system.
Addressing these Limitations
The limitations described above call for a new solution. We offer a complete and scalable testing solution that combines the Arsin Effecta Validation Engine with IBM Rational Functional Tester.
By automating the validation of data, business processes, custom development and integrations across SAP applications, you can increase the quality of the implementation, support multiple changes in your environment and mitigate business risks. By eliminating manual testing, you can also avoid greater difficulties in production that ultimately impact the quality and performance of the business. Arsin’s Effecta Test Suite provides the benefits of a complete testing solution by automating impact analysis of changes, test data maintenance, test execution and validation.
Figure 1: Arsin Effecta Solution Architecture for testing SAP applications
Test Data Manager
Stores test data along with its selection criteria in the Effecta database. Before an automated test executes, the validity of the test data is checked on the target system and the stored data is automatically updated. If the test data no longer exists in the target system or cannot be reused, the data set update feature refreshes it with new valid data.
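To make the idea concrete, here is a small hypothetical Python sketch of such a pre-execution validity check; the names and structures are illustrative only and are not the Effecta API:

    from dataclasses import dataclass

    @dataclass
    class TestDataSet:
        key: str        # e.g. a sales order number expected on the target system
        criteria: dict  # selection criteria stored alongside the data

    def ensure_test_data(data_set, target_records, find_fresh):
        """Return a usable data set, refreshing it when the stored one is stale."""
        if data_set.key in target_records:    # data still exists on the target system
            return data_set
        return find_fresh(data_set.criteria)  # refresh with new valid data

    # Example: the stored order was consumed, so a fresh one is selected.
    stored = TestDataSet(key="SO-1001", criteria={"order_type": "standard"})
    fresh = ensure_test_data(stored, {"SO-2002": {}},
                             lambda c: TestDataSet(key="SO-2002", criteria=c))
    print(fresh.key)  # SO-2002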
Script Manager
Automatically enhances recorded scripts and eliminates the need for customization. Script Manager enables scriptless automation with IBM Rational Functional Tester.
Change Impact Manager
When changes occur in a system, Change Impact Manager automatically extracts affected objects and identifies test cases to be executed for regression testing. It also identifies objects being changed that don’t have test cases in the library.
Report Manager
Report Manager provides out-of-the-box reports for tracking test artifacts, development and test execution. Detailed test results pinpoint the failed events in a test case.
Test Manager
Effecta promotes reusability and repeatability with the following features:
- Ability to create Test Requirements and link them to Test Cases and development objects
- Ability to create Test Cases and link them to Test Requirements for coverage analysis
- Ability to create separate Test execution steps in the form of Test Procedures and link them to Test cases
- Defect management
- Dashboard for reporting and metrics
Validation Manager for Middleware
Simulates inbound messages at various data interchange points, validates outbound messages, and automatically validates translations and mappings.
Validation Manager for Transactional Systems
Validation Manager for Transactional Systems is a completely configurable, customizable and readily deployable validation library of components for various business processes. It significantly accelerates validation by automatically extracting the actual data created by transactions and comparing it with expected results. The Validation Manager is specifically designed to support SAP systems.
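To illustrate the extract-and-compare pattern in the abstract (a hypothetical Python sketch, not the product's implementation; the field names are invented):

    def validate_transaction(actual, expected):
        """Compare the data a transaction actually created against the
        expected results; return one message per mismatched field."""
        return [f"{field}: expected {want!r}, got {actual.get(field)!r}"
                for field, want in expected.items()
                if actual.get(field) != want]

    # Example: a posted document is checked field by field.
    actual = {"doc_type": "invoice", "amount": 90.0, "currency": "USD"}
    expected = {"doc_type": "invoice", "amount": 100.0, "currency": "USD"}
    for failure in validate_transaction(actual, expected):
        print(failure)  # amount: expected 100.0, got 90.0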
Validation Manager for BI
Tests Business Intelligence systems during initial implementation and during maintenance and support pack deployments. It also automates the validation of data loaded from multiple ERP and other systems. Provides sophisticated reporting including detailed results.
Conclusion
The benefits of using automation in SAP testing are abundant. Test automation, deployed with minimal effort, enables increased test coverage, which in turn reduces cycle time and enables efficient bug detection early in the development cycle. Since test automation is designed for reusability, routine tasks are eliminated and total cost of ownership is reduced. Test automation is far more precise and consistent, and features standardized reporting, enabling clear test analysis across the QA environment.
Sarat Addanki is the Vice President, ERP Practice. He has 18 years of experience in the ERP arena, including the design, development and testing of ERP implementations. He was part of a team of SAP quality professionals contributing to the design of SAP Test Accelerator TAO. He founded the ERP Division at Arsin, which focuses on developing frameworks and accelerators to ensure delivery excellence, reduce the overall cost of ownership and increase productivity in ERP implementations. The Test Accelerators he designed significantly improve the testing process, knowledge management and test automation. His division focuses on providing quality services for SAP, Oracle, PeopleSoft, Sterling, Retek and Middleware applications. His domain expertise spans the pharmaceutical distribution, hi-tech, manufacturing and retail industries. He is a PMI (Project Management Institute) certified Project Management Professional (PMP). Sarat holds a bachelor's degree in Computer Science and Engineering from Osmania University, Hyderabad, India.
Friday, July 22, 2011
Panther Applications in Croatia
Brief History
When the Prolifics application development toolset came to the Croatian market in 1990, independent software vendor Pardus (then 4-MATE) chose it to develop a back-office application for a large retailer. The character-mode JAM5 application ran on an Intel-based UNIX system with 60+ concurrent users, the largest in the region at that time.
Based on the successful experience with the Prolifics toolset, Pardus developed another large integrated information system, this time for retail banks. The platform was again character-mode JAM5 on UNIX, with custom mechanisms for distributed database support. The system has since migrated to a recent version of Panther and is still in use today.
Pardus continued to use JAM and Panther for its own development, and started to distribute them to other Independent Software Vendors (ISVs) and to end-user organizations with their own IT staff. Programs for JAM and Panther training, consulting, project management, and end-user development team mentoring were created. This contributed to the tool's rapid success in the Croatian market.
As a result, Panther is now used by the two largest banks in the country. One of them still uses the originally Pardus-developed software for its core data processing, supported by 70+ in-house Panther developers and a team of Pardus consultants. Other users, apart from ISV houses, include organizations such as the Croatian postal service, customs, health insurance, several ministries and the Zagreb municipal administration.
An Example: Forensic DNA Database
Pardus uses Panther for a wide variety of applications and encourages fellow developers to do the same. One interesting example is the Pardus-developed eQMS::DNA application, a DNA “fingerprint” database, now in use in the Central Forensic Laboratories of two countries.
When the opportunity to develop such an application arose, Pardus again chose Panther for its excellent rapid prototyping abilities, the flexibility of its scripting language and the versatility of its database transaction generator. Native XML import and export capabilities were an added advantage.
The resulting eQMS::DNA application is primarily used for maintaining and efficiently searching a database of human genotypes for forensic purposes (such as the identification of biological traces like blood, hair and skin), but it can also be used in fields such as livestock lineage tracking.
DNA fingerprinting relies on the fact that certain points in the human (or other) genome, called loci, change relatively quickly from generation to generation (they display polymorphism) – fast enough to form a combination unique to an individual, but slowly enough to be stable within a single individual's cells. The type of polymorphisms and the number of loci used for constructing genotypes in eQMS::DNA are configurable, but a typical installation will employ a standard set of 13 to 18 STR (short tandem repeat) loci.
The system maintains data on individual donors with optional, end-user-configurable personal and demographic data; multiple samples containing genetic material taken from each donor; and the genotypes obtained from those samples, possibly using multiple techniques and identification kits. Both processed genotypes and optional additional data such as peak quality, confidence parameters and raw electropherograms can be kept. The system also keeps profiles of unidentified traces.
Manual entry of data from plate gel electrophoresis into Panther screens is possible, but the typical data source is automated capillary electrophoresis sequencers. Communication with systems such as the Interpol DNA Gateway is also supported.
Searches can be performed interactively or in fully automatic mode. All searches, including those using partial profiles and relaxed criteria, typically complete in less than a second. The system also supports mixed-stain searches, with provisions for identifying common contaminants (such as the genotypes of laboratory or other forensic personnel).
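For illustration only, here is a simplified Python sketch of partial-profile matching; this is not eQMS::DNA's actual algorithm, and the locus names and threshold are merely examples:

    def matches(query, candidate, min_shared_loci=7):
        """A candidate matches when every locus present in the (possibly
        partial) query agrees and enough loci were compared."""
        shared = [locus for locus in query if locus in candidate]
        return (len(shared) >= min_shared_loci and
                all(sorted(query[l]) == sorted(candidate[l]) for l in shared))

    # Each profile maps a locus to the pair of allele repeat counts observed.
    database = {"donor-17": {"D3S1358": (15, 16), "vWA": (17, 18),
                             "FGA": (21, 24), "TH01": (6, 9),
                             "D8S1179": (13, 13), "D21S11": (29, 30),
                             "D18S51": (14, 16), "D5S818": (11, 12)}}
    trace = dict(database["donor-17"])
    del trace["FGA"]  # one locus failed to amplify in the degraded trace
    print([d for d, p in database.items() if matches(trace, p)])  # ['donor-17']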
Interpol maintains a list of available DNA profiling systems (probably the best known being the FBI's CODIS). eQMS::DNA is the only application on that list from a commercial software developer.
Figure 1: Screenshot of the eQMS::DNA profiling application
New Developments
Pardus has assisted many clients in modernizing their legacy character-mode JAM and Panther applications.
For example, a Complex Card Management application for a leading Croatian bank was recently ported from JAM5 character-mode to Panther5 GUI. Initial functionality was complete within a month, with an additional month spent adding capabilities made possible by the new version of the Panther tool.
Pardus mentored several of its customers as they transitioned from character mode to GUI to the Web environment, and from 2-tier to multi-tier architecture. One example involved developing a Java wrapper to call mainframe-based Web services from within a 2-tier GUI and Web Panther application. Another customer, a public health institution, uses a similar Pardus-provided tool to give its clients controlled access to its LIMS software (also developed by Pardus), which contains data on the analysis of food and water samples.
Despite the market focus shifting away from dedicated application development toolsets, Panther remains a viable product in the Croatian market, thanks to its high penetration and the level of experience and expertise available to its customers.
For more info see http://dna.pardus.hr/ and http://lims.pardus.hr/.
Dragi Raos is a co-founder of Pardus d.o.o., a software development and IT consulting company from Zagreb, Croatia. Pardus is a distributor of Panther and JAM in Croatia. Dragi has three decades of experience in technical and scientific computing, the design and development of complex financial applications, and the training and coaching of development teams. He has served as a team leader or technical consultant for clients ranging from the International Atomic Energy Agency to large regional banks and public health institutions. Dragi's technical expertise includes database management systems, middleware, CASE tools and a wide range of development environments, including 20 years of experience with Panther and all versions of JAM.
Wednesday, July 13, 2011
Learn About Security: Open Authorization in Federated Applications using IBM Security Tools
IBM Tivoli Federated Identity Manager (TFIM) simplifies application integration by providing single sign-on between disparate web applications, so users do not have to share their passwords or re-enter them. TFIM uses various protocols to achieve federation, including SAML, WS-Federation, and OpenID. Our Security LoB has been invited by IBM to participate in a beta program to implement the popular authorization protocol, OAuth. OAuth, which stands for Open Authorization, is a protocol that allows users to approve applications to act on their behalf. OAuth makes it possible to exchange critical information across distinct organizations, based upon a service level agreement that designates one application as an OAuth client and the other as an OAuth provider. One major benefit of the OAuth protocol, compared to its alternatives, is its emphasis on authorization. This is giving rise to a hybrid model in which our customers can combine protocols like SAML or OpenID for authentication with OAuth for authorization. Besides making the token exchange mechanism transparent to the user, OAuth provides mechanisms to define the scope of the user’s data on the Provider that the Client may access.
Here is a fictitious example. Imagine PFAP as a financial application dashboard developed by Prolifics that provides a user with a consolidated view of his account balances across multiple banks. First, PFAP would have to be in an agreement as an OAuth client with each of the banks from which account information would be obtained on behalf of the user. Once an agreement is set up with each Provider, PFAP would be registered as an OAuth client to that particular bank (Provider) and would be provided with a client ID and a shared secret for each one. This information (Client ID, Shared Secret) helps the Provider determine whether the application (Client) requesting data on behalf of the user is one of its trusted OAuth clients. Assuming an agreement between Prolifics and a leading financial firm, PFAP is one of the OAuth clients that has access to the Firm's customer data, upon approval. The first time a user logs into the PFAP application, he will be asked to add his account number to PFAP. Once the user selects the “Add Account” button, he is redirected to the Firm's website, where he is asked to enter his credentials. At this step, PFAP requests a token from the Firm in the background, and the token is authorized once the user logs into the Firm's website. This grants PFAP access to act on the user’s behalf.
From the user’s perspective, once he has logged in, the Firm displays a “Consent to Authorize” page where the user must permit PFAP to act on his behalf and retrieve information within a certain scope, which in this case is his account balance. Once the user agrees, a verifier code is sent to PFAP in the background. PFAP then requests an access token from the Firm's application, sending the verifier code, Client ID, Shared Secret and a few other parameters. The Firm verifies the Client ID and Shared Secret to confirm that PFAP is one of its OAuth clients, then verifies the verifier code and generates an access token. Once PFAP receives the access token, it can get the user’s data on his behalf, though only within the permitted scope, which in this case is the account balance. The next time the user logs in, since PFAP already holds an access token, he sees his balance information without having to log into the Firm's website. Hybrid model implementations are now being considered, in which combining OAuth with protocols like SAML or OpenID would also achieve SSO. For instance, once logged into PFAP, a hybrid model would let the user perform other operations on the Firm's website, such as balance transfers, by launching a new link to the Firm without the need to log in again (SSO).
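To show the shape of this exchange in code, here is a sketch using the requests_oauthlib Python library; the endpoint URLs and credentials are fictitious, matching the made-up PFAP example rather than any real provider:

    from requests_oauthlib import OAuth1Session

    CLIENT_ID = "pfap-client-id"          # issued when PFAP registered with the Firm
    SHARED_SECRET = "pfap-shared-secret"  # proves PFAP is a trusted OAuth client

    # 1. PFAP requests a temporary token in the background.
    oauth = OAuth1Session(CLIENT_ID, client_secret=SHARED_SECRET,
                          callback_uri="https://pfap.example.com/callback")
    tokens = oauth.fetch_request_token("https://firm.example.com/oauth/request_token")

    # 2. The user is redirected to the Firm to log in and consent;
    #    the Firm then returns a verifier code to PFAP.
    print(oauth.authorization_url("https://firm.example.com/oauth/authorize"))
    verifier = "verifier-code-from-callback"

    # 3. PFAP exchanges the verifier, plus its client credentials,
    #    for an access token.
    oauth = OAuth1Session(CLIENT_ID, client_secret=SHARED_SECRET,
                          resource_owner_key=tokens["oauth_token"],
                          resource_owner_secret=tokens["oauth_token_secret"],
                          verifier=verifier)
    oauth.fetch_access_token("https://firm.example.com/oauth/access_token")

    # 4. Within the consented scope, PFAP can now read the balance on the
    #    user's behalf without the user logging into the Firm again.
    balance = oauth.get("https://firm.example.com/api/accounts/balance")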
Tuesday, July 12, 2011
BPM Best Practices for the Financial Industry
In our current economic environment, the financial industry is challenged by two very significant needs: to improve efficiency and to enhance service. I spoke about these business needs last year at an event hosted by Prolifics and IBM, and they couldn’t be more significant today. To satisfy these requirements, organizations are tasked with driving down costs by consolidating duplicated and siloed systems into well-defined, reusable services, and with managing customer service levels with greater flexibility.
This industry has a collection of 'habits,' or best practices, that have a powerful effect on business performance in these critical areas. Over time, we have captured the best practices that have proven to be successful with process management programs within the financial industry. At this seminar, we reviewed 11 specific practices that help financial services organizations experience success with projects/delivery, team competency and leveraging Business Process Management (BPM) across the enterprise.
I’d like to share some of these ‘habits’ with you now:
Make BPM about Productivity and Visibility
- Metrics, KPIs and SLAs should be part of the DEFINE phase
- Don’t scope out metrics
- Remember: visibility is critical to improvement
Never “One and Done”
- Iterative Approach: continuous process improvement
- Additional phases or versions will always happen: the value in BPM is that you can get your first version out there quickly, but the real opportunity is in versions 2, 3 and 4, where you bring entirely new levels of capability, sophistication, efficiency and effectiveness to your organization
Don’t Skip Process Analysis
- Processes are carried out by many different parties! Process analysis helps you understand: What does the end-to-end process look like? What data is needed at different points? What velocity do we need in this process? How quickly do we need turnaround?
- Process analysis is what sets building process applications apart from traditional application development
Build a Complete Team
- Have the right mix of resources on the team with a broad set of skill sets
- Java (.NET) developers aren’t all you need
Establish the Owners
- A requirement for succeeding with BPM is that processes must be business-owned. You need people from the business to engage and determine what the process priorities are.
- The key benefit of this iterative approach is that you can make tradeoffs and changes to adapt to changing business conditions and requirements. This level of business engagement will ensure that the right decisions are made.
In addition, financial institutions face a highly demanding environment requiring exceptional agility. The seminar focused on how customers can reap the benefits of the business rule approach to operational decision making in the areas of payments, credit and lending, risk management and customer care for financial institutions. With business rules, key decisions in your financial processes can be changed in minutes to days rather than months, bringing new levels of efficiency to day-to-day operations.
To read more about these 11 Habits for highly successful BPM programs and the benefits of a business rules management system, please take a look at this presentation. For any questions about these topics or Prolifics’ solutions for the financial industry, please email solutions@prolifics.com.
Don Rivera is a Client Executive with Prolifics managing the NY & NJ Metro territory. Don is a certified IBM WebSphere Solution Sales Professional working with SMB and Enterprise accounts to determine how to leverage IBM software technology to meet their critical business objectives. He brings over 16 years of experience working in the information technology industry in various system engineering, sales and business development roles with companies such as Computer Sciences Corporation, Level 3 Communications and BBN Technologies.
Thursday, May 26, 2011
Leveraging your Panther Assets with Web Services
Software applications have become a valuable component of modern enterprises. They contain critical business knowledge and represent significant design and development effort. It only makes sense to extract as much value from these applications as possible. As enterprises grow and merge, the need to share the information in these applications becomes imperative. This applies to your Panther applications as well. For example, order entry systems need to talk to billing systems, shipping systems, and so on.
While there are many methods for accessing your Panther applications, Web Services provide a common, well-defined method that works across diverse platforms, products, and computer languages. As long as each application implements the Web Services standards, applications can freely interoperate with each other. This bi-directional communication is independent of the technology in which the target application was written.
Your Panther applications can participate in this inter-application communication by implementing Web Services, multiplying the value within them. In this way, systems throughout your enterprise, or beyond, can benefit from the existing code and data within your Panther applications.
You can also utilize your Panther tools and skills to create new RAPID Database Transactional Web Services for just about any application. This is totally independent of your existing Panther applications and utilizes the same rapid development platform.
For a complimentary Discovery Call, please call your Business Development Manager at 1 (800) 458-3313 ext 2 or email crm@prolifics.com.
Monday, May 16, 2011
Prolifics BPM Methodology - 5 Steps to Improve Your Process and Build Your Evidence-Based Business Case
Business process improvement is a systematic approach that helps organizations become more efficient by optimizing their core business processes to increase productivity and reduce cost. Business process improvement initiatives have emerged as essential drivers for organizations competing in a rapidly and unpredictably changing market. According to a Gartner EXP survey, improving business processes has been one of the top 5 business priorities for the past 5 consecutive years.
The business process improvement approach is a series of actions taken by a process owner to improve a business process to meet a new goal defined by the organization. Those actions have to follow a methodology or a framework in order to create successful improvement results.
Any process improvement methodology consists of three macro-level steps that occur in sequence.
In this white paper, I present Prolifics' methodology for process improvement. The methodology is designed to address the fundamental challenges of traditional process improvement approaches; it also provides a simple road map for process improvement, powered by innovative technologies, that will guide you step by step through your process improvement journey and expedite the process improvement cycle. The methodology is presented in the context of a real customer initiative to improve a core business process.
To read this white paper, click here.
Hanna Aljaliss is a Solution Architect in the BPM & Connectivity practice at Prolifics. He has over 7 years of consulting experience in the IT field - 5 of them focused on IBM Business Process Management and SOA implementations. He has led several major enterprise initiatives across different industries from the conceptual stage to the live solution stage. Hanna holds a Master's degree in computer engineering from Stevens Tech and has been a frequent presenter at IBM's Premier Conference for Business and IT Leaders (IMPACT).
Monday, April 18, 2011
Converting your Legacy JAM Application into a Panther Web Application
Converting a legacy JAM/Panther 2-tier application into a Panther Web Application offers a significant advantage: a conversion allows reusing a significant portion of the existing code as most JPL and C functions continue to be fully functional.
Although conceptually straightforward, the conversion process is not trivial or automatic. It presents some challenges and involves making changes and additions to the existing code.
In this document, I start by quickly describing some key differences between a JAM/Panther 2-tier application running in a GUI environment and a Panther application running on the web. I then discuss the aspects of the application that are reviewed during the process of converting a GUI application to the web.
Key differences between a GUI application and a Web application
In a GUI environment, a JAM/Panther application runs in a dedicated process that performs several tasks for the application: this one process makes the calls required to display the screens and widgets to the user, handles the screen event cycle and maintains the application state. All the screens and JPL code are loaded and executed in this same process. This process also connects to the backend, which is typically a database accessed through the Panther DBi.
When an application is executed on the web, the architecture is quite different. For starters, instead of having one process perform virtually all the tasks required for the application to work, several processes (residing in different hosts) are involved in performing the tasks required for an application to run on the web.
On the web, the browser is the only program running on the client computer. It allows the user to interact with the web pages dynamically generated on the server. Once a given page is displayed to the user, there is no interaction with the server until the user performs an action that results in submitting a request to the server. More specifically, as the user interacts with a page in the browser, several events are generated and they can be divided into two groups: events that are handled locally by the browser alone (for example, when the user tabs between the fields in the page) and events that require server processing (for example, when the user clicks on a push button to perform a search in the database).
When an event in this second group occurs, the browser submits a request to the HTTP server. The request is then passed along to an available Panther Web Application process, which executes the appropriate Panther code and replies with a new rendition of the screen in the form of a new HTML page. This HTML code is then transmitted back to the browser and displayed to the user.
This is perhaps the root cause of most of the changes required when converting an application to the web: in a GUI environment, a single process continually handles the screen event cycle, executes the appropriate Panther code and displays the application screens using the platform’s native API, whereas on the web these operations are split between the browser (which renders the HTML code it receives and maintains its own event cycle as the user interacts with it) and the Panther Web Application processes residing on the server (which receive requests from the clients, execute the appropriate Panther code, and reply with HTML code that is sent back to the browser).
The Panther Web Application processes that actually execute the Panther code are a pool of processes called Jservers. Each of these Jserver processes handles one request at a time, and generates a response. Being stateless processes, the Jservers retain no memory of previous transmissions: as soon as a Jserver process has produced the reply for a request, it again becomes available to process more requests, which may come from the same user session or, in most cases, from an altogether different user session. Using stateless processes is a common practice for web applications because they allow excellent scalability: a small number of stateless processes can handle requests coming from a large number of users.
Maintaining the Application state
So, if the Jservers are stateless processes: how is the application state maintained for each user session on the web? The short answer is: by caching data.
Panther Web automatically caches application state data, such as the values of hidden widgets, scroll state of widgets, and bundles.
Two modes of caching data are supported: embedding the cached data in the generated HTML code, or keeping the cached data on the server and embedding just a reference to the cached data in the generated HTML code.
Panther Web also provides functions to define and use context global variables from JPL code. Such variables are set by a specific user session and remain private to that user session.
Functions to store and retrieve data in HTTP cookies are also provided.
HTML Generation
As described previously, the application screens are dynamically rendered as HTML code to be presented on the browser. Panther automatically generates the HTML code for the screens and the widgets in them.
The HTML generated by Panther may need to be customized for two main reasons: to fine-tune the visual appearance of the screen on the web and to integrate JavaScript code.
Using the Panther editor, the name of a pre-existing HTML document can be specified in the HTML-template property of a screen thereby providing the structure of the HTML generated for the screen. This allows you the flexibility to determine how the HTML for the screen is generated. The provided HTML template is tied to the Panther backend by embedding Panther-provided tags into it, thus specifying the exact location where the HTML code for the dynamic elements are to be included in the resulting HTML page.
Custom HTML properties can also be set for the individual widgets on a screen. These properties allow making additions or changes to the HTML attributes within the INPUT element that Panther generates for a widget.
These properties can also be used to hook in JavaScript functions and JavaScript libraries such as Dojo and jQuery.
To give you an example of the kind of things that can be done by customizing the HTML generation, see the screen shot of an application screen in the Panther editor.
By including code along the following lines in the screen JPL, the attribute property of the Single Line Text widget called “i_odate” is modified before Panther generates the HTML for the screen.
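A minimal JPL sketch of such an assignment, assuming a Dojo 1.x declarative setup (the exact dojoType value, and the event in which the assignment is made, will vary with your configuration):

    proc web_exit
    i_odate->attribute = "dojoType='dojox.widget.Calendar'"
    return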
When the screen is displayed in the browser, the widget is no longer rendered as a plain input field but as a Dojox calendar widget.
Navigation
GUI applications typically have menu bars that allow the user to navigate between screens. On the web, there is no natural replacement for menu bars, and many alternatives are available for providing navigation controls. During the conversion, it is necessary to select the one that best suits your needs.
Web Event Handling
You can provide JPL procedures with the names listed below, and those procedures get executed as events occur in a web application:
web_startup – This procedure is called when a Jserver process is started. The code to open the connection to the database is typically invoked from this JPL procedure, and the database connection is maintained throughout the life of the Jserver process. This procedure can also be used to load public procedures and data used throughout the application, specify database error handlers, and define global variables.
web_enter – Each screen can have its own implementation of this procedure. It is called once per submitted request, after the screen entry event and before the web browser data is loaded into the Panther screen structure.
web_exit – Each screen can also have its own implementation of this procedure. This is invoked after all other events have been processed and immediately before Panther dynamically generates the HTML output for the request being processed.
web_shutdown – This procedure is invoked when the Jserver process is shutting down. This is where any required application clean up is typically invoked, including the code to close database connections.
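As a rough JPL sketch of how the startup and shutdown hooks pair up (the DBMS statements are illustrative and must be adapted to your database engine, credentials and connection options):

    proc web_startup
    dbms declare c1 connection for user 'appuser' password 'secret' database 'orders'
    return

    proc web_shutdown
    dbms close connection c1
    return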
Conclusion
Several approaches need to be evaluated when facing the prospect of making an existing Panther GUI application available on the Internet or an intranet.
This document has provided you with a glimpse of the differences between the two environments and has presented aspects of the application that need to be addressed during a conversion. Hopefully this information has piqued your interest in converting GUI applications to the web in general and in the Panther Web product in particular.
Eduardo Ramos is a Project Manager at Prolifics. He has over 16 years of experience in the IT field, specializing in the development and migration of multi-tier applications using various technologies including Panther and the IBM WebSphere family of products.
Although straightforward, the conversion process is not trivial or automatic. The conversion process does present some challenges and involves making changes and additions to the existing code.
In this document, I start by quickly describing some key differences between a JAM/Panther 2-tier application running on a GUI environment and a Panther application running on the web. Then, I proceed to discuss aspects of the application that are reviewed during the process of converting a GUI application to the web.
Key differences between a GUI application and a Web application
In a GUI environment, when a JAM/Panther application is executed, it runs on a dedicated process that performs several tasks for the application: this one process makes the calls required to display the screens and widgets to the user, handles the screen event cycle and maintains the application state. In this same process, all the screens and JPL code are loaded and executed. This process, also, connects to the backend, which is typically a database that is accessed through the Panther DBi.
When an application is executed on the web, the architecture is quite different. For starters, instead of having one process perform virtually all the tasks required for the application to work, several processes (residing in different hosts) are involved in performing the tasks required for an application to run on the web.
On the web, the browser is the only program running on the client computer. It allows the user to interact with the web pages dynamically generated on the server. Once a given page is displayed to the user, there is no interaction with the server until the user performs an action that results in submitting a request to the server. More specifically, as the user interacts with a page in the browser, several events are generated and they can be divided into two groups: events that are handled locally by the browser alone (for example, when the user tabs between the fields in the page) and events that require server processing (for example, when the user clicks on a push button to perform a search in the database).
When an event in this second group occurs, the browser submits a request to the HTTP server. The request is then passed along to an available Panther Web Application process, which executes the appropriate Panther code and replies with a new rendition of the screen in the form of a new HTML page. This HTML code is then transmitted back to the browser and displayed to the user.
This is perhaps the root cause of most of the changes required for converting an application to the web: whereas in a GUI environment, a single process continually handles the screen event cycle, executes the appropriate Panther code and displays the application screens using the platform’s native API. On the web, these operations are split between the browser (which renders the HTML code it receives and maintains its own event cycle as the user interacts with it) and the Panther Web Application processes residing on the server (which receive requests from the clients, execute the appropriate Panther code, and reply with HTML code that is sent back to the browser).
The Panther Web Application processes that actually execute the Panther code are a pool of processes called Jservers. Each of these Jserver processes handles one request at a time, and generates a response. Being stateless processes, the Jservers retain no memory of previous transmissions: as soon as a Jserver process has produced the reply for a request, it again becomes available to process more requests, which may come from the same user session or, in most cases, from an altogether different user session. Using stateless processes is a common practice for web applications because they allow excellent scalability: a small number of stateless processes can handle requests coming from a large number of users.
Maintaining the Application state
So, if the Jservers are stateless processes: how is the application state maintained for each user session on the web? The short answer is: by caching data.
Panther Web automatically caches application state data, such as the values of hidden widgets, scroll state of widgets, and bundles.
Two modes of caching data are supported: embedding the cached data in the generated HTML code, or keeping the cached data on the server and embedding just a reference to the cached data in the generated HTML code.
Panther Web also provides functions to define and use context global variables from JPL code. Such variables are set by a specific user session and remain private to that user session.
Functions to store and retrieve data in HTTP cookies are also provided.
HTML Generation
As described previously, the application screens are dynamically rendered as HTML code to be presented on the browser. Panther automatically generates the HTML code for the screens and the widgets in them.
The HTML generated by Panther may need to be customized, mainly for 2 reasons: to fine-tune the visual appearance of the screen on the web and to integrate JavaScript code.
Using the Panther editor, the name of a pre-existing HTML document can be specified in the HTML-template property of a screen thereby providing the structure of the HTML generated for the screen. This allows you the flexibility to determine how the HTML for the screen is generated. The provided HTML template is tied to the Panther backend by embedding Panther-provided tags into it, thus specifying the exact location where the HTML code for the dynamic elements are to be included in the resulting HTML page.
Custom HTML properties can also be set for the individual widgets on a screen. These properties allow making additions or changes to the HTML attributes within the INPUT element that Panther generates for a widget.
These properties can also be used to hook in JavaScript functions and JavaScript libraries such as Dojo and jQuery.
To give you an example of the kind of things that can be done by customizing the HTML generation, see the screen shot of an application screen in the Panther editor.
By including the code shown below in the screen JPL, the attribute property of the Single Line Text widget called “i_odate” is modified before Panther generates the HTML for the screen:
When the screen is displayed in the browser, see how the widget is no longer displayed just as an input field, but as a Dojox calendar widget.
Navigation
GUI applications typically have menu bars that allow the user to navigate between screens. On the web, there is no natural replacement for menu bars and many alternatives are available for providing navigation controls on the web. During the conversion, it is necessary to select the one that better suits your needs.
Web Event Handling
You can provide JPL procedures with the names listed below; these procedures are executed as the corresponding events occur in a web application (a skeleton sketch follows the list):
web_startup – This procedure is called when a Jserver process is started. The code to open the connection to the database is typically invoked from this JPL function and the database connection is maintained through the whole life of the Jserver process. This procedure can also be used to load public procedures and data used through the application, specify database error handlers, and define global variables.
web_enter – Each screen can have its own implementation of this procedure. It is called after the screen entry event and before the web browser data is loaded into the Panther screen structure, and only once per submitted request.
web_exit – Each screen can also have its own implementation of this procedure. This is invoked after all other events have been processed and immediately before Panther dynamically generates the HTML output for the request being processed.
web_shutdown – This procedure is invoked when the Jserver process is shutting down. This is where any required application clean up is typically invoked, including the code to close database connections.
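As a rough skeleton of the four handlers, the JPL sketch below shows where each piece of logic typically lives. The connection and database names and the DBMS statements are placeholders to adapt from your own application's connection logic, and the exact syntax should be checked against the Panther documentation.

    // Called once when a Jserver process starts.
    proc web_startup
    // Open the database connection for the life of the process
    // (connection and database names are placeholders).
    DBMS DECLARE session CONNECTION FOR DATABASE "orders"
    return

    // Called once per request, after screen entry and before the
    // browser data is loaded into the Panther screen structure.
    proc web_enter
    // e.g. restore per-session context needed by this screen
    return

    // Called once per request, immediately before Panther
    // generates the HTML output.
    proc web_exit
    // e.g. adjust widget properties that affect the generated HTML
    return

    // Called when the Jserver process shuts down.
    proc web_shutdown
    // Release resources, including the database connection.
    DBMS CLOSE CONNECTION session
    return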
Conclusion
Several approaches need to be evaluated when facing the prospect of making an existing Panther GUI application available on the Internet or an intranet.
This document has provided you with a glimpse of the differences between the two environments and has presented aspects of the application that need to be addressed during a conversion. Hopefully this information has piqued your interest in converting GUI applications to the web in general, and in the Panther Web product in particular.
Eduardo Ramos is a Project Manager at Prolifics. He has over 16 years of experience in the IT field, specializing in the development and migration of multi-tier applications using various technologies including Panther and the IBM WebSphere family of products.
Thursday, March 10, 2011
Outsourcing IT
I’ve been thinking recently about the whole “Cloud” thing: “Cloud computing”, “Cloud hosting”, “Identity Management in the Cloud”, cloud-this and cloud-that. In essence, it all seems to be the business telling its IT department: you are too expensive. We want to get rid of you without getting rid of the services you provide.
Business knows that an IT department is important. It saves money in many ways, keeps the back office running and helps in executing business processes. But in many organizations IT costs too much, with all its security, high availability, disaster recovery, compliance and support requirements. Business cringes at all the capital project proposals and budgets for IT spending. This is why they are looking for an alternative: one that provides the back-office support without the worry about all the high-ticket items, like HA, DR and GRC, that IT seems to stick on the budget proposal every year. And this is exactly what the “cloud” tries to provide. The cloud is an abstracted business function in which all the high-ticket IT items are spread over multiple clients, and are therefore cheaper for any particular client to have. The IT department, after all, is just a business-paid expense that has no real, intrinsic value all by itself.
The business, of course, wants a high level of service, a good “Service Level Agreement” to cover its needs. This is where we enter the world of ITIL. SLAs and ITIL are a step toward getting IT outsourced. An SLA without extra value is a way to make IT separable, commoditizable. I am not saying they are bad. I am saying that if you excel at delivering the services in your SLAs without bringing benefits to the business, you are no different from a third-party outlet selling server time for a monthly fee.
So, before you dismiss the “cloud” business as yet another popular but short-lived word in the IT vernacular, think of the implications this model has for the future of IT. There is a trend of businesses cutting back on their IT departments, and I really see only one way for the IT department to survive this transition: by becoming a cloud integration department. On the low level, someone needs to integrate in-house systems with the clouds during and after the transition to cloud-based services. On the high level, someone needs to understand the business and know how to map it to the services the different clouds provide.
Granted, it may take a decade before the onslaught of the clouds, depending on how hard the business pushes toward cost-cutting, but if you are working in an IT department, start training now for one of these roles.
P.S. Yes, the cloud providers will need IT skills to develop and maintain their cloud offerings, but the number of jobs will be much smaller compared to today's in-house IT staff.
To see the original blog entry, please click here.
Alex Ivkin is a senior IT Security Architect with a focus in Identity and Access Management at Prolifics. Mr. Ivkin has worked with executive stakeholders in large and small organizations to help drive security initiatives. He has helped companies succeed in attaining regulatory compliance, improving business operations and securing enterprise infrastructure. Mr. Ivkin has achieved the highest levels of certification with several major Identity Management vendors and holds the CISSP designation. He is also a speaker at various conferences and an active member of several user communities.
Tuesday, March 8, 2011
Enterprise Single Sign-On Tug of War
A desktop-based Single Sign-On solution is a joy to have if you are a desktop user. Equally, it is a pain to have if you work for an IT department and have to support it. In many organizations the line between the two is very thin, and which way it moves often determines the success of an Enterprise Single Sign-On (ESSO) implementation. Here is a quick list of the typical gripes and the responses one can use to pull the rope in ESSO's favor.
- Desktop support team: Man, it replaces the Microsoft GINA. We need to provision it to all of the existing desktops, test it on our gold build, communicate with all of the affected users… It’ll take more than you think to implement it.
- Business: OK, so let’s see how well you manage your assets. If you know them, can provision them and keep them homogeneous, you should not have too many problems. If not, let’s work on the asset management first.
- Infrastructure: Users want to be automatically logged in to an enterprise app that is not covered by ESSO yet. Now we’ve got to develop another profile. This is not easy. The development, testing and support will take a lot of time.
- Business: Yes, that is the ongoing cost of ESSO. Either engage the vendors, get the training and do it in-house, or outsource it.
- Infrastructure: Now we have to have staff to support another server, another database and a bunch of desktops.
- Security: Hey, but no more sticky notes under keyboards with passwords.
- Help desk: We are getting more calls about desktop apps incompatible with the ESSO.
- Business: The incompatible apps will have to be worked through with the desktop support and the vendors.
- Security: We do not want to accept the responsibility for accidentally exposing all personal logins people may store in ESSO, like passwords for web-mail, Internet banking, shopping, forums, you name it.
- Consultant: Set ESSO up with personal, per-user key encryption. The downside, though, is that if a user changes their passwords and then forgets their response to a challenge question, they will lose their stored passwords.
- Help desk: Everybody is forgetting their responses to the challenge questions. People are unhappy about having to lose their stored passwords.
- Consultant: Set ESSO up with a global key, and let the Security department worry about an appropriate use policy and the privacy policy.
- Security: We do not want to send people their on-boarding passwords in plain text in an e-mail, or print them out.
- Consultant: Integrate your ESSO with an identity management solution and have it automatically distribute passwords to people’s wallets.
- Infrastructure: All the setup, configuration and support takes so much time!
- Business and End Users: Hey, it is nice not to have to type enterprise passwords every time. The help desk is getting fewer calls about recovering forgotten passwords. It saves so much time!
To see the original blog entry, please click here.
Alex Ivkin is a senior IT Security Architect with a focus in Identity and Access Management at Prolifics. Mr. Ivkin has worked with executive stakeholders in large and small organizations to help drive security initiatives. He has helped companies succeed in attaining regulatory compliance, improving business operations and securing enterprise infrastructure. Mr. Ivkin has achieved the highest levels of certification with several major Identity Management vendors and holds the CISSP designation. He is also a speaker at various conferences and an active member of several user communities.
Tuesday, February 22, 2011
The Not-So-Secret, Secret MQ Script
For those of us who work with IBM products, we all know the power of the Information Center, better known as the Info Center. At a client in lovely Tampa, Florida, Infrastructure Practice Director AJ Aronoff and I were tasked with installing WebSphere MQ v7 and WebSphere MQ File Transfer Edition v7.0.2 onto a SUSE Linux Enterprise Server 11 system.
Now, for those who have not installed WebSphere MQ on Linux and UNIX systems: certain kernel parameters pertaining to semaphores and shared memory must be set at or above certain minimum values. If they are not, MQ may not operate correctly, which on a production system spells disaster. The WebSphere MQ Info Center has a “Quick Beginnings for Linux” section, which walks users through the pre-installation tasks that need to be completed. Naturally, there is a section about setting the kernel parameters.
This section tells users to run the command “ipcs -l”, which displays the kernel parameters and their current settings, and provides an example of the minimal settings that MQ Server requires. The “ipcs -l” command will display the parameters in the format shown below:
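On a typical Linux system, the relevant portion of that output looks like the following; the actual values vary by machine:

    ------ Shared Memory Limits --------
    max number of segments = 4096
    max seg size (kbytes) = 32768
    max total shared memory (kbytes) = 8388608
    min seg size (bytes) = 1

    ------ Semaphore Limits --------
    max number of arrays = 128
    max semaphores per array = 250
    max semaphores system wide = 32000
    max ops per semop call = 32
    semaphore max value = 32767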
One would think this format would allow an admin to check the parameter settings that MQ requires, make the changes, and move on to the install. The problem is that the Info Center page doesn't use this format. It states the requirements like so:
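That is, as sysctl-style settings. The values below are representative of what the MQ v7 documentation called for at the time; check the Info Center for the exact minimums that apply to your version:

    kernel.shmmni = 4096
    kernel.shmall = 2097152
    kernel.shmmax = 268435456
    kernel.sem = 500 256000 250 1024
    fs.file-max = 524288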
Examine these two formats for long enough and you can work out some of the likely correlations. But others, such as the kernel.sem setting, can be interpreted in many ways, since some of the values could map to multiple parameters. Research provides more hints about the other settings, such as their short names, but nothing solid for the kernel.sem parameter. There is, however, an IBM support page devoted purely to this little problem, though it too stops short of a concrete translation of the kernel.sem parameter. An amateur user would probably skip this page, as its title says “Unix IPC resources” rather than “kernel parameters” or “Linux”; but looking back at the “Quick Beginnings” page, one notices that its first sentence reads “System V IPC resources”. IBM hid our now not-so-secret script, mqconfig, on this page, easy to miss if you scroll right past it. The script reads kernel and software information from the system you run it on, compares it to the IBM requirements for MQ, and prints whether the system passes or fails each of the necessary parameters.
Once the failed settings have been corrected by copying the proper values into the sysctl.conf file, running the script again produces output like this:
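Paraphrased rather than copied verbatim from the tool, whose exact layout differs, a clean run reports a PASS for each checked parameter, along these lines:

    mqconfig: analyzing SUSE Linux Enterprise Server 11 settings
    kernel.sem        500 256000 250 1024    PASS
    kernel.shmmni     4096                   PASS
    kernel.shmall     2097152                PASS
    kernel.shmmax     268435456              PASS
    fs.file-max       524288                 PASS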
So for those of you other than AJ and myself who will be installing MQ on Linux or Unix, save yourself some time and a headache, and use this handy script. It can be found here: http://www-01.ibm.com/support/docview.wss?rs=171&context=SSFKSJ&dc=DB520&dc=DB560&uid=swg21271236&loc=en_US&cs=UTF-8&lang=en&rss=ct171websphere
Patrick Brady is a Consultant at Prolifics based out of New York City. He has 3 years of consulting experience based around the WebSphere family of products, focusing on the administration side of customer implementations. He specializes in High Availability solutions for WebSphere MQ and Message Broker.