AJ Aronoff is a Practice Director and IBM Champion covering Application Infrastructure at Prolifics. At the recent IBM Impact conference, AJ was interviewed about current infrastructure demands in the healthcare industry. Specializing in healthcare solutions, AJ explains how he helps organizations understand the latest healthcare regulations. In this video, he discusses the value that IBM WebSphere Application Server (WAS) brings to the industry and the latest advancements in WAS 8.5. In addition, AJ explains the impact mobile technology is having on the healthcare space, opening up the possibility of cutting down on paperwork and allowing doctors and medical staff to record information at each bedside.
AJ Aronoff is the Application Infrastructure Practice Director for Prolifics and an IBM Champion for IBM WebSphere. AJ first joined Prolifics as a Developer, then specialized in WebSphere MQ. He has 25 years of experience in the IT field — 17 of those years at Prolifics. As a Prolifics consultant, he has done MQ design, implementation, infrastructure, monitoring and security assignments at several large financial, insurance, retail and communication firms (Bloomberg, Credit Suisse, Deutsche Bank, DTCC, Fidelity, ITG, JPMC, Och Zif, Tokyo Marine, Pep Boys and British Telecom). He has presented on security and infrastructure at Impact, at Hursley comes to Minneapolis and Palisades, and at MQ User Groups. His customers use Omegamon to monitor over a thousand systems across the globe.
Application scalability can be defined in two ways: vertical and horizontal.
Vertical scalability is the ability of an application to scale as additional CPUs are added to the same server.
Horizontal scalability is the ability of an application to scale as additional servers are added to the environment.
Scalability of an application is important for large enterprise Portal deployments where an individual server cannot support the anticipated load. In such cases, the Portal provides the capability to cluster multiple instances of the application. This can be achieved by installing multiple instances of the Portal on a single server with a large number of CPUs, by having each instance run in a separate Java Virtual Machine (JVM), or by a combination of both.
Performance improvements depend on how the operating system, application server and JVM handle the scheduling of threads across a larger number of CPUs. This is also subject to change as subsequent versions of JVMs are released to market. In addition, careful tuning of the application server is required to ensure that the load on a single instance can be supported most effectively (e.g., the WebSphere Web Container Thread Pool parameters).
For horizontal scalability, the only limitation on the scalability of the Portal will most likely come from the network or the database (the database is a shared resource among all instances of the Portal in a particular cluster). Any other system component shared by all Portal instances (e.g., an LDAP directory server) can potentially create a bottleneck and prevent further scaling of the application.
For Portal instances running in a clustered environment, Portal network throughput should remain below 70% of the maximum network bandwidth available at peak periods (70% of a 100 Mb/sec network is 70 Mb/sec, or roughly 8.75 MB/sec). Network throughput above this level will degrade average response times for end users during these periods.
Database CPU utilization above 75% at peak periods can greatly impact Portal performance, depending on the types of queries being executed at that time.
If a web server (e.g., Apache, Sun ONE, IIS) is used as a proxy in front of the application server in a production environment, it is recommended to host the web server on a separate physical server. Such a configuration may be encountered in an Internet deployment with a DMZ (demilitarized zone).
The net effect is that more resources, in the form of memory and CPU, will be available to the application server to process requests. This will slightly increase response times due to the additional network hop, but will increase the overall throughput of the application because of the additional CPU resources available.
Tuning parameters for WebSphere and Apache Tomcat application servers
WebSphere:
The optimal number for the Web Container Thread Pool Maximum Size is 75 (default is 50). Both lower and higher values result in slightly lower throughput.
Increasing the Thread Pool Minimum Size has an adverse effect on performance.
Similarly, increasing the Thread Inactivity Timeout has a slightly negative effect on performance.
Running the Performance Monitoring Service introduces about 30% overhead.
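The Web Container Thread Pool settings above have a close analogue in `java.util.concurrent`. The standalone sketch below (an illustration of the tuning knobs, not WebSphere's own implementation) wires a `ThreadPoolExecutor` with the recommended maximum of 75 threads; the minimum of 10, the 60-second inactivity timeout, the queue depth, and the `ThreadPoolSketch` class itself are assumed values chosen for the example.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolSketch {

    // Pool mirroring the Web Container knobs: minimum (core) size,
    // maximum size, and thread inactivity timeout. The values other
    // than the maximum of 75 are illustrative assumptions.
    static ThreadPoolExecutor newContainerLikePool() {
        return new ThreadPoolExecutor(
                10,                          // minimum (core) threads
                75,                          // maximum threads
                60, TimeUnit.SECONDS,        // thread inactivity timeout
                new LinkedBlockingQueue<>(100)); // bounded request backlog
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newContainerLikePool();
        AtomicInteger served = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            pool.execute(served::incrementAndGet); // simulate 100 requests
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("requests served: " + served.get());
    }
}
```

Because the queue is bounded, a burst larger than the queue plus the maximum pool size would be rejected, which is analogous to a saturated web container shedding load rather than accepting unbounded work.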
The development time issues relate to how the Java code for the Web application was designed and implemented. Again, there is a whole set of implementation best practices surrounding this area, such as:
Do not create sessions for JSPs if they are not required.
Do not store large objects in your session.
Time out sessions quickly, and invalidate your sessions when you are done with them.
Use the right scope for objects.
Use connection pooling for improving performance.
Cache static data.
Use transfer objects to minimize calls to remote services.
Minimize logging from Web applications, or use simple logging formats.
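The connection-pooling recommendation above can be illustrated with a minimal borrow/release pool built on a `BlockingQueue`. This is a teaching sketch only: in a real WebSphere or Tomcat deployment you would configure the server's pooled `DataSource` rather than writing your own. The `SimplePool` class, its parameters, and the use of `StringBuilder` as a stand-in for a connection are all hypothetical.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Minimal generic resource pool showing the pattern behind connection
// pooling: expensive resources are created once up front and then
// borrowed and released, instead of being created per request.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pre-create all pooled resources
        }
    }

    // Borrow a resource, waiting up to the timeout rather than creating
    // a new one; throws if the pool stays exhausted.
    public T borrow(long timeoutMillis) throws InterruptedException {
        T t = idle.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (t == null) throw new IllegalStateException("pool exhausted");
        return t;
    }

    // Return the resource for reuse; callers should do this in a finally block.
    public void release(T t) {
        idle.add(t);
    }

    public int available() {
        return idle.size();
    }

    public static void main(String[] args) throws InterruptedException {
        SimplePool<StringBuilder> pool = new SimplePool<>(3, StringBuilder::new);
        StringBuilder conn = pool.borrow(100);
        try {
            conn.append("work");     // use the pooled "connection"
        } finally {
            pool.release(conn);      // always give it back
        }
        System.out.println("available: " + pool.available());
    }
}
```

The `finally` block is the important part of the pattern: a borrowed connection that is never released is a leak that eventually exhausts the pool.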
For large enterprise deployments of the Portal, it is recommended that a dedicated database server be used to host the Portal schema. In addition, certain database subsystems (e.g., MetaStore, Users) can be further distributed across several database servers. Deploying the Portal schema in this manner will help to ensure that the database will not become a bottleneck as the Portal is scaled to support a continually growing user base.
Oracle 11g:
To assist in the rollout, build a list of tasks that increase the chance of optimal performance in production and enable rapid debugging of the application. Do the following:
When you create the control file for the production database, allow for growth by setting MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, and MAXLOGHISTORY to values higher than what you anticipate for the rollout. This technique uses more disk space and larger control files, but saves time later should these limits need to be extended in an emergency.
Set block size to the value used to develop the application. Export the schema statistics from the development or test environment to the production database if the testing was done on representative data volumes and the current SQL execution plans are correct
Set the minimal number of initialization parameters; ideally, leave most other parameters at their defaults. If more tuning is needed, the need will become apparent once the system is under load.
Be prepared to manage block contention by setting storage options of database objects. Tables and indexes that experience high INSERT/UPDATE/DELETE rates should be created with automatic segment space management. To avoid contention of rollback segments, use automatic undo management
All SQL statements should be verified to be optimal and their resource usage understood
Validate that middleware and programs that connect to the database are efficient in their connection management and do not log on or log off repeatedly.
Validate that the SQL statements use cursors efficiently. The database should parse each SQL statement once and then execute it multiple times. The most common reason this does not happen is that bind variables are not used properly and WHERE clause predicates are sent as string literals. If you use precompilers to develop the application, then make sure to reset the parameters MAXOPENCURSORS, HOLD_CURSOR, and RELEASE_CURSOR from their default values before precompiling the application.
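The bind-variable point can be made concrete with a toy model of a shared SQL cache: each distinct statement text costs a parse, so literal predicates force one parse per execution, while a bound predicate is parsed once and reused. The `ParseCountDemo` class below, along with the `users` table and its columns, is a hypothetical illustration of the principle, not Oracle's actual parsing machinery.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of a database's shared SQL cache: the first time a given
// statement text is seen it must be parsed; identical text afterwards
// is a cache hit and reuses the parsed statement.
public class ParseCountDemo {

    static int countParses(Iterable<String> statements) {
        Set<String> cache = new HashSet<>();
        int parses = 0;
        for (String sql : statements) {
            if (cache.add(sql)) {
                parses++; // cache miss: this statement text must be parsed
            }
        }
        return parses;
    }

    public static void main(String[] args) {
        List<String> literals = new ArrayList<>();
        List<String> binds = new ArrayList<>();
        for (int id = 1; id <= 1000; id++) {
            // Literal predicate: the SQL text is unique for every value.
            literals.add("SELECT name FROM users WHERE id = " + id);
            // Bind variable: identical text; the value is supplied at execute time.
            binds.add("SELECT name FROM users WHERE id = :id");
        }
        System.out.println("literal parses: " + countParses(literals)); // 1000
        System.out.println("bind parses: " + countParses(binds));       // 1
    }
}
```

In JDBC the same effect is achieved by using `PreparedStatement` with `?` placeholders instead of concatenating values into the SQL string, which also closes off SQL injection.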
Validate that all schema objects have been correctly migrated from the development environment to the production database. This includes tables, indexes, sequences, triggers, packages, procedures, functions, Java objects, synonyms, grants, and views. Ensure that any modifications made in testing are made to the production system
As soon as the system is rolled out, establish a baseline set of statistics from the database and operating system. This first set of statistics validates or corrects any assumptions made in the design and rollout process
Prolifics kicks off SAPPHIRE NOW & ASUG Annual Conference this week in Orlando. At the conference, we will be showcasing our SAP testing solutions and sharing recent success stories with customers like McKesson and Pacific Coast. At booth #2715, we will also showcase a demo of our Test Automation and Data Validation tool, Effecta.
Get a sneak peek of Effecta by viewing the demo below.
At IBM Impact, I had the opportunity to co-present with one of Prolifics' BPM clients, Broadcast Music, Inc. The exciting thing about this presentation is that we introduced BPM in the context of the dynamic and rapidly changing music industry. With new digital music outlets such as Pandora, Hulu and others, music is no longer confined to vinyl albums, CDs or traditional venues like live concerts, karaoke bars or radio. Music is truly global and ubiquitous, which presents interesting new process challenges for those in the music business.
Howard Webb is a Director of Prolifics' BPM Advisory Services. Howard and his team provide consulting and guidance to clients in transitioning to highly efficient process-managed business models, and equip them for success in their BPM initiatives. For over 25 years he has been a consultant, trainer, facilitator, and speaker on the topics of Business Process Management (BPM), data architecture, and project management. He founded the Midwest BPM Users Group and has published articles on BPM and enterprise architecture. Prior to coming to Prolifics, Howard was founder and partner of Bizappia, a consulting and services firm focused on business agility, performance and innovation. Prior to Bizappia, he was a Sr. BPM Technical Specialist with IBM.