Tuesday, October 27, 2009
OpenID is an identity-sharing and single sign-on protocol that is becoming more and more popular on the net. OpenID allows you to use a single authenticating source (aka an identity provider) to log in to any site that accepts OpenIDs (aka a service provider) without the need to create an account on that site. Yahoo!, Google, AOL, SourceForge, Facebook and many others now support it. A great idea, but unfortunately it comes with some big holes.
What OpenID means, in essence, is that you are entrusting all your account access to a single source. You trust your identity provider to safeguard your personal information until you decide to use it. So, to no one's surprise, most of the attack vectors target this trust relationship.
Spoofing an identity provider
If you use one of the common identity providers, say myopenid.com, you need to be aware of identity phishers. An attacker could devise a site that, after asking you to log in with an OpenID, sends you to a myopenid-look-a-like.com. You trustingly enter your OpenID login information, and, boom, the ID and password that open access to all your OpenID accounts are in the wrong hands.
The switch user attack
If you are one of the paranoid types and host your own identity provider, say via a WordPress OpenID plugin, you may fall victim to a URL hijacking technique. If attackers gain the ability to modify pages on your site (PHP is great at that), they could then modify the headers on your pages to redirect OpenID validation requests to their own identity provider. With the redirect configured, when they log in to a service provider with your OpenID URL, the service provider will authenticate against the attackers' own identity provider, thus making them appear as you, anywhere they go. We've proven this scenario on our host, and it is very viable and very scary.
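To see why this works, consider how an OpenID 1.x relying party discovers your identity provider: it fetches your OpenID URL and reads the `openid.server` (and optionally `openid.delegate`) link tags out of the page head. Whoever can edit that page controls which provider gets trusted. Below is a minimal sketch of that discovery step; the URLs are invented for illustration.

```python
from html.parser import HTMLParser

class OpenIDLinkParser(HTMLParser):
    """Collects the <link rel="openid.server"/"openid.delegate"> tags
    that OpenID 1.x relying parties use to discover the identity provider."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if a.get("rel") in ("openid.server", "openid.delegate"):
                self.links[a["rel"]] = a.get("href")

def discover(html: str) -> dict:
    parser = OpenIDLinkParser()
    parser.feed(html)
    return parser.links

# Page as published by the legitimate owner:
victim_page = """<html><head>
<link rel="openid.server" href="https://victim.example/openid/server">
</head></html>"""

# Same page after an attacker injects their own delegation tags:
hijacked_page = """<html><head>
<link rel="openid.server" href="https://attacker.example/openid/server">
<link rel="openid.delegate" href="https://attacker.example/fake-victim">
</head></html>"""

print(discover(victim_page)["openid.server"])    # provider the owner chose
print(discover(hijacked_page)["openid.server"])  # now points at the attacker
```

The relying party has no way to tell the difference: the page content is the identity, so a page edit is an identity theft.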
OpenID URL hijacking
Another set of attacks targets the OpenID URL itself. An OpenID URL is your unique identifier on the net as far as service providers are concerned. If someone gains control over the URL, either through DNS manipulation (google "DNS attacks") or site hacking, they hold a key to all your accounts. An example would be tricking a service provider into resolving your OpenID URL to an attacker's site that uses the attacker's identity provider, thus making the service provider trust the attacker, who is posing under the victim's URL. The use of i-numbers in lieu of URLs is supposed to help with this issue, but they are not yet widely supported.
Cross site request forgeries
OpenID does not validate all of the traffic that flows between the identity provider and the service provider through the user's browser via hidden iframes. A malicious site could supply your browser with a page that, knowing your OpenID from your cookies, determines your identity provider and automates actions against any number of service providers, acting on your behalf. The actions could range from creating accounts under your name to divulging details of your existing accounts on those sites. Secunia has published detailed research on this type of attack.
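The standard defense on the service-provider side is a synchronizer token: bind a secret token to the user's session and require it on every state-changing request, so a hidden iframe on a third-party site cannot forge a valid request. A minimal sketch of that check (function names and the session scheme are illustrative, not from any particular OpenID library):

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; in practice stored per deployment.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Derive a token bound to this session; embed it in the form the
    service provider serves, so only pages we rendered can submit it."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid_request(session_id: str, submitted_token: str) -> bool:
    """A cross-site page cannot read the victim's token, so a forged
    request arrives without it and is rejected."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)

token = issue_csrf_token("session-abc")
print(is_valid_request("session-abc", token))     # legitimate form post
print(is_valid_request("session-abc", "forged"))  # cross-site forgery fails
```

Nothing in the OpenID protocol itself mandates this; each service provider has to implement it on its own, which is exactly why the hole keeps reappearing.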
Automated sign-ups
The OpenID sign-on process makes it really easy for automated processes to log in or create accounts on the fly. A spammer could run an identity provider that validates its own IDs at a rate of hundreds per second and then supply them to service providers. This could be mitigated by pairing the OpenID field with a CAPTCHA, but most OpenID service providers do not support that right now.
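Short of a CAPTCHA, a service provider could at least throttle how many sign-ups a single identity provider can complete in a time window, since the spammer's accounts all trace back to one provider URL. A simplistic, hypothetical sliding-window limiter (not part of any OpenID specification) might look like this:

```python
import time
from collections import defaultdict, deque

class ProviderRateLimiter:
    """Caps how many OpenID sign-ups a single identity provider may
    complete per sliding window. A deliberately simple sketch."""
    def __init__(self, max_per_window, window_seconds):
        self.max = max_per_window
        self.window = window_seconds
        self.hits = defaultdict(deque)  # provider URL -> recent timestamps

    def allow(self, provider_url, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[provider_url]
        while q and now - q[0] > self.window:  # drop expired entries
            q.popleft()
        if len(q) >= self.max:
            return False
        q.append(now)
        return True

limiter = ProviderRateLimiter(max_per_window=3, window_seconds=60)
results = [limiter.allow("https://spammer.example/op", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False] -- the fourth attempt is rejected
```

This does nothing against a spammer who rotates provider URLs, which is why it is only a mitigation, not a fix.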
Yes, there are bugs, both in the specifications and in the technical implementations. I will not go into details here, since these are typically short-lived and are addressed by the vendors on an ongoing basis. Such holes are exploited by hackers and are to be expected of any new technology appearing on the web. The problem is that the stakes with OpenID are a lot higher. Losing an OpenID means not only losing your ID but also losing a multitude of accounts and the associated personal information.
OpenID keeps your ID out of your hands and on the net, a place you have no control over. I am sure current OpenID providers will work hard to stay well protected and retain your trust, but rest assured, there will be breaches. Identity providers are very attractive targets to hackers, since they act as gateways to a wide array of accounts. And when a breach happens, all your accounts are potentially lost, not just one. Thus, OpenID should be treated as a convenience, not a way to increase the security of your accounts. From another perspective, assuming Linus's law holds, I do not see OpenID going the way of Microsoft Passport. OpenID has the advantage of being open and freely available.
Nonetheless, until OpenID is as mature from the security perspective as SSL and GPG, I am sticking with managing my accounts in my web browser's encrypted password store. It's almost as convenient and a lot better protected. After all, you keep your driver's license in your own wallet, not posted on the web.
Alex Ivkin is a senior IT Security Architect with a focus in Identity and Access Management at Prolifics. Mr. Ivkin has worked with executive stakeholders in large and small organizations to help drive security initiatives. He has helped companies succeed in attaining regulatory compliance, improving business operations and securing enterprise infrastructure. Mr. Ivkin has achieved the highest levels of certification with several major Identity Management vendors and holds the CISSP designation. He is also a speaker at various conferences and an active member of several user communities.
Monday, October 19, 2009
As the momentum and understanding of BPM and SOA have increased, the projects have followed. IBM WebSphere Process Server and ESB (WPS/WESB) are common products that organizations start with when moving towards BPM/SOA/web services. Many organizations are new to this type of SDLC. This discussion is in the context of my experience on WPS/WESB projects, and it can certainly be applied to other workflow and ESB products and technologies. Here are some things NOT to do on such projects:
- Delay Data Model and Data Design efforts
- Plan integration validation between systems/apps/services toward the end of the project
- Assume a Sr. Developer with no experience on WPS/WESB will design/develop a functioning application
DO NOT plan integration validation between systems/apps/services toward the end of the development SDLC. On one recent project the customer was not familiar with WPS/WESB or with integration projects in general. They planned all their integration testing toward the end of the SDLC, in the testing phase. I am not saying integration testing shouldn't be done in the testing phase, but it should NOT be planned only at the end of the SDLC. A common project plan will include a 'Vertical Slice', 'Prototype', 'Wire-frame' or whatever term you are familiar with, whose goal is to validate the integration of the various systems early in the SDLC.
DO NOT assume a Sr. Developer with no experience on WPS/WESB will design and develop a functioning platform. WPS/WESB are enterprise platforms that layer multiple technologies (e.g. Java, JEE, BPEL, WS, XML, XSLT, etc.). As a proud, successful Sr. Developer you may very well be able to create an application on these platforms that functions in a non-production environment. However, there are a number of nuances that impact performance and that should be addressed with design patterns, depending on the requirements. Large business objects are one concern that comes to mind. Acceptable object size depends on business transaction volume, CPU architecture, RAM, heap size and other factors. Design patterns for dealing with large business objects can be applied, giving you better performance.
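One such pattern is the claim check: instead of carrying a large business object through every step of a flow, store it once and pass a small reference (the "claim ticket") between steps, retrieving the payload only where it is actually needed. The sketch below illustrates the idea in plain Python with an in-memory store; names are invented, and in a real WPS/WESB deployment the store would be a database or a cache, not a dict.

```python
import uuid

# Illustrative in-memory payload store for the claim-check pattern.
payload_store = {}

def check_in(large_object: bytes) -> str:
    """Park the large object and return a small claim ticket."""
    ticket = str(uuid.uuid4())
    payload_store[ticket] = large_object
    return ticket

def check_out(ticket: str) -> bytes:
    """Redeem the ticket at the step that actually needs the payload."""
    return payload_store.pop(ticket)

big_doc = b"x" * 10_000_000            # a 10 MB business object
ticket = check_in(big_doc)
# ...the flow routes only the 36-character ticket between mediation steps,
# keeping per-message heap usage small...
print(len(ticket))                     # 36
print(check_out(ticket) == big_doc)    # True
```

The win is that intermediate mediations never serialize, copy, or hold the full payload, which is exactly where large-object heap pressure comes from.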
WPS/WESB is a product I've worked with extensively. It has seen major enhancements and improvements in usability and consumability, but this doesn't mean anyone can create a well-functioning app.
Jonathan Machules first joined Prolifics as a Consultant, and is currently a Technology Director specializing in SOA, BPM, UML and IBM's SOA-related technologies. He has 12 years experience in the IT field — 2 of those years at Oracle as a Support Analyst and 10 years in Consulting. Jon is a certified IBM SOA Solution Designer, Solutions Developer, Systems Administrator and Systems Expert. Recent speaking engagements include IMPACT on SOA End-to-End Integration in 2007 and 2008, and SOA World Conference on SOA and WebSphere Process Server in 2007.
Wednesday, October 14, 2009
Rajiv Ramachandran, Practice Director, Enterprise Integration / Solution Architect
Having an integration infrastructure that connects all enterprise systems to one another and provides seamless and secure access to customers, partners, and employees is the foundation of a successful enterprise. I have been involved in discussions with many customers about their EAI architectures. More often than not, I have noticed that the approaches considered for implementing such an architecture are not complete and do not provide the benefits that could be achieved with a well-connected enterprise. In this blog entry, I would like to highlight aspects that need to be considered to build an end-to-end integration solution. (Note: this entry will not get into the details of how to implement each of these areas; that would require me to write a book.)
- Connectivity – Avoiding point-to-point connections and ensuring that you have loosely coupled systems is key to an EAI architecture that is flexible and can scale. Use an ESB as the heart of your EAI architecture and ensure that your ESB supports all major protocols (HTTP, SOAP, JMS, JCA, JDBC, FTP, etc.) and comes with adapters for common enterprise applications like SAP, Oracle, Siebel, PeopleSoft, etc.
- Patterns – Build a pattern-based integration solution. The following is an excellent paper that outlines some of the common patterns used in the integration space: http://www.ibm.com/developerworks/library/ws-enterpriseconnectivitypatterns/index.html
- Data – Data is of great significance when it comes to integration. Different systems have different data formats, and there are common items to consider that can help you deal with these differences:
- Define canonical data formats and ensure that you have mapping from application specific formats to canonical formats. Understand the various data formats that exist in your enterprise today and evaluate what it will take to map and manage complex data formats.
- EDI data is common in many enterprises and will require special handling.
- Define a strategy for handling reference data, how lookups can be done against this data and how reference data can be maintained.
- Define rules around both syntactic and semantic validation of messages. Do not overdo validation, as you will pay a price in performance. Be judicious in what you validate and where.
- Security – Define your security requirements: authentication, authorization, encryption, non-repudiation, etc.
- Decide what aspect(s) can be supported by transport level security and when you need message level security.
- Decide where hardware components can be used to better implement security than software components.
- Use open standard protocols so that you can easily integrate with different systems – both internal and external.
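The canonical-format point above is the one I see skipped most often, so here is a tiny sketch of what it means in practice: each application-specific record is mapped into one shared canonical shape, so every other system only ever has to understand that one shape. The source field names below are illustrative (the SAP-style names follow its customer-master conventions, but treat all of them as assumptions).

```python
# Canonical customer format: {"customer_id", "name", "country"}.
# One mapper per source application; everything downstream sees only
# the canonical shape, never the application-specific one.

def from_sap(rec: dict) -> dict:
    """Map an SAP-style customer record to the canonical format."""
    return {
        "customer_id": rec["KUNNR"],
        "name": rec["NAME1"],
        "country": rec["LAND1"],
    }

def from_siebel(rec: dict) -> dict:
    """Map a Siebel-style account record to the canonical format."""
    return {
        "customer_id": rec["Row Id"],
        "name": rec["Account Name"],
        "country": rec["Country"],
    }

canonical = from_sap({"KUNNR": "0000123", "NAME1": "Acme Corp", "LAND1": "US"})
print(canonical)  # {'customer_id': '0000123', 'name': 'Acme Corp', 'country': 'US'}
```

With N applications this costs N mappers instead of the N×(N-1) point-to-point translations you would otherwise accumulate, which is the whole argument for canonical formats.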
This confluence of pattern-based connectivity, data handling, monitoring, security, and service-oriented integration can provide you with a well-connected enterprise that can respond quickly to changing business needs.
Rajiv Ramachandran first joined Prolifics as a Consultant, and is currently the Practice Director for Enterprise Integration. He has 11 years experience in the IT field — 3 of those years at IBM working as a developer at its Object Technology Group and its Component Technology Competency Center in Bangalore. He was then an Architect implementing IBM WebSphere Solutions at Fireman’s Fund Insurance. Currently, he specializes in SOA and IBM’s SOA-related technologies and products. An author at the IBM developerWorks community, Rajiv has been a presenter at IMPACT and IBM's WebSphere Services Technical Conference.
Tuesday, October 6, 2009
One of the buzzwords that followed the introduction of SOA was “Governance”. It was interesting to see how every aspect of a new project initiative began to be tagged with this word. All of a sudden there were project governance, architectural governance, infrastructure governance and so on. The real essence of what “SOA Governance” was, or why “Governance” was important in the context of an SOA, was lost.
I am not denying that governance is essential in every aspect of business and IT. But what I want to focus on in this blog is SOA Governance.
Services have been around in the technology space all along, but the advancement of SOA and its adoption started when customers and vendors came together to define a standard way to describe a service. It then became possible to implement this description in a programming language of choice, deploy the service across diverse platforms, and still communicate across platform and language boundaries. With this came a form of SOA revolution: reusing services became much easier, and with reuse came a unique set of challenges.
My business depends on a service that I:
- Did not write,
- Do not own,
- Cannot control who will make changes to it and when,
- Don’t know whether it will provide me with the qualities of service that I desire.
What an SOA Governance model does is bring uniformity and maturity to defining Service Ownership, Service Lifecycle, Service Identification & Definition, Service Funding, Service Publication & Sharing, Service Level Agreements, etc., and thus provide a solution to what would otherwise become Service Oriented Chaos.
So the next time you talk about SOA Governance, think about some of the areas defined above that pertain to an SOA, and about how you can align Process, People and Products to achieve an SOA Governance solution that ensures your SOA provides real business value.
In the next set of blog entries I will focus on how the IBM WebSphere Service Registry and Repository product helps with SOA Governance.