
SOA Integration: Technical View
This section describes the products and technologies available to complete your middleware architecture, which may include an LSB, a Java EE application server, and security infrastructure. Don't get overwhelmed by all the choices and technologies. Your specific business and technology drivers will quickly narrow the set of products and technologies in your LSB and Java EE application server down to the handful mentioned in this section. However, before you buy anything specific, it is important that you understand the entire set of products and technologies that make up the LSB and Java EE application server. Otherwise, you run the risk of solving a specific use case but not building an agile middleware architecture that meets both tactical and strategic business objectives.
The technical view adds aspects to the architecture that are very important from an implementation perspective: performance, scalability, reliability, security, management, end-to-end testing, and governance. It is not that these attributes are unimportant to the business community. Business users just assume that the IT infrastructure will be easy to use, will always be on, will run fast, will be secure, and will run without errors.
Governance must be considered in terms of both design time (developing, controlling, and managing) and runtime (security, control, monitoring, and managing). Questions that are typically asked are: Who is running my services? How are they being used? What are the impacts to current business processes?
Something else that has not been discussed is the mainframe as both a provider and consumer of Web services. In most situations that we encounter, the requirements are that the legacy system will be called as a Web service (provider), and will also call Web services that are Java or .NET based on open systems (consumer). Your technical architecture should take into account both inbound and outbound Web services support (the legacy system can act as a Web service server, or a Web service client).
Sometimes, we are asked, "Do I really need an LSB or application server? Can't I just call my legacy Web services from an HTML page?". In other words, is an application server and/or SOA suite of products really needed? You could create an LSB without using an SOA product. However, if you thought your legacy architecture was a 'hair ball' to start with, it will now have even more lines and points of entry crossing each other. Remember, the idea is to create an infrastructure that is less rigid and more agile than your current system.
The technical view consists of the four basic components from the functional view. Although this is represented as a two-dimensional model, it is actually a three-dimensional model. The third dimension includes the aspects of functionality, performance, and governance that cut through the three layers represented here. The complete set of technologies (once again, don't panic, as you will not need all of these) looks like this:

User Interface
In the legacy world, the device and presentation layer are very tightly coupled, for example, a 3270 device displaying a 3270 presentation data stream. In the new world of IT, the user interface device and presentation layer are not tightly coupled, and the range of devices becomes much broader. The user device could be a PC, PDA, cell phone, thin client terminal, or any device that supports HTTP and a browser. The presentation layer could be a Java applet, JavaServer Pages (JSP), JavaServer Faces (JSF), AJAX, PHP, or a .NET application residing in a portal. Since the device is not fixed, and new devices will emerge that did not exist five years ago (like smart phones and iPhones), the presentation-tier should be capable of displaying in any channel. This is often called multichannel delivery.
Legacy Service Bus (LSB) and Application Server
The LSB is a component of your middleware stack. Middleware can be anything from a crude, homegrown, internally developed XML data transfer and transformation utility to an all-inclusive SOA Suite. Middleware in the open systems realm has advanced from HTTP servers, Java applications, portals, and data integration servers to a complete integrated stack of application server and SOA software. The term SOA Suite has become commonplace in the IT industry. An SOA Suite consists of the following components:

The interesting aspect of SOA Suites in the Legacy SOA Integration space is that some of the components can reside on the open systems middle-tier, on the mainframe-tier, or on both. These components (as shown in the above picture) include BPEL, security, management, and the ESB (normally the queue). This is important because in some cases it may make sense to do the BPEL process orchestration on the mainframe, so that the processing is closer to the application and data. This will reduce network traffic and boost service response times. However, it could increase the MIPS usage of your mainframe, which is typically what you are trying to reduce.
Legacy Services Engine (LSE)
Talk to any legacy connectivity vendor and they will quickly steer the conversation away from gateways and screen scraping, and for good reason. Screen scraping is an old method of exposing the legacy presentation-tier as Web services. It relied on the field position (row/column) and the type of terminal stream, and both these restrictions made it very inflexible. A big problem occurs when a screen map changes: the application that receives the screen-scraped stream is likely to fail. Gateways are taboo because they imply heavyweight, difficult-to-manage products. Most legacy SOA companies refer to their products as adapters or servers. We will use the term Legacy Services Engine (LSE), believing it incorporates everything that this engine does, all the way from mainframe connectivity to results processing.
The LSE is where the real action happens. This is where the bridge between the legacy system and the middle-tiers takes place. This engine also has features such as security, monitoring, caching, and management. As this is such an important and broad area, we will break this into LSE components, development, and implementation/deployment areas. We will start with the LSE Components.
We will cover the key LSE components in this section — the connectivity and processing engine for mainframe connectivity and information processing of mainframe screens, logic, and data.
The processing engine has a protocol server, SQL engine, XML parser, and transformation processor. The typical protocols supported are SOAP (HTTP and XML), JMS, and Java EE Connector Architecture (Java EE CA). SOAP requires neither an adapter nor a process on the middle-tier to handle the results on the client. When Java EE CA is used, you hear the term adapter being used; an adapter implies that there is a piece of software on the middle-tier to handle the processing of the results. The Java EE CA adapter will communicate through SOAP messages. JMS is not typically used in a synchronous online transaction environment.
This engine is where the results are processed, and the response sent back to the client. The client could be an HTML page, a BPEL process, a Java application, or some other process residing on the middle or client-tiers. As mentioned earlier, the engine needs to be both a provider and a consumer of services. As a provider of Web services, the standard processing steps are as follows:
- Receive a service request from the client — The LSE has a protocol and request handler to receive the request from the client.
- Map the request to the appropriate legacy artifact — The request is then processed using the metadata repository to map the call to the correct mainframe call and do any data type conversion. If the request is a database call, the SQL engine will transform the call into the appropriate legacy data store access scheme.
- Call the appropriate legacy artifact — The application, transaction, program, stored procedure, 3270 screen, or a data store is called.
- Process the results — The results are returned to the processing engine. The metadata repository is used once again to perform any data transformations, data type mapping, or cleansing. The results are formatted into a SOAP response.
- Send the response back — The results are put back on the communication network and sent back to the client.
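The five provider steps above can be sketched in a few lines. This is an illustrative sketch only, in Python; the names (METADATA, call_mainframe, handle_request) are invented for the example and do not correspond to any real LSE product API.

```python
# Hypothetical sketch of an LSE handling a provider request:
# receive, map via metadata, call the legacy artifact, convert
# the result types, and hand back a response (SOAP in practice).

METADATA = {
    # service name -> (mainframe program, field-level type conversions)
    "getBalance": ("CICS:ACCTPGM", {"balance": float}),
}

def call_mainframe(program, args):
    # Placeholder for the actual host call (EXCI, JCA, sockets, and so on).
    return {"balance": "001234.56"}

def handle_request(service, args):
    program, conversions = METADATA[service]       # 2. map via metadata repository
    raw = call_mainframe(program, args)            # 3. call the legacy artifact
    result = {k: conversions.get(k, str)(v)        # 4. data type conversion
              for k, v in raw.items()}
    return {"service": service, "result": result}  # 5. wrapped as a SOAP response

print(handle_request("getBalance", {"acct": "42"}))
```

In a real engine, step 4 is driven by the metadata repository rather than a hard-coded table, but the flow is the same.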
The SQL Engine is required for data-enabled services. The SQL engine will evaluate subqueries, joins, and aggregate functions when possible. It is best for these operations to happen as close as possible to where the data is stored, so the engines will typically do this type of processing before sending the result set back to the client. The SQL engine has minimal work to do when a request is made via SQL and the database source is an SQL-compliant database engine such as IBM DB2. However, differences do exist despite vendors' compliance with industry standards such as SQL. Data type translation is typically more intensive, as there are no industry standards on how data is stored. The situation gets exponentially more complicated when you bring non-relational data stores (such as VSAM, IMS, IDMS, and Adabas) into the mix. These databases do not support SQL natively, and the database calls in the application are not likely to be SQL. The automated transformations required to solve this problem are fourfold:
- SQL translations — SQL translation takes into account the differences between Oracle's implementation of SQL and each participating non-Oracle database's data store call interface. In the case of DB2, the differences are minimal and easy to discover. For nonrelational data stores, SQL is not the native calling method, so SQL will need to be translated into the native call. The SQL statement passed to the engine will look like this:
Exec :Bind-variable := value;
Select column1 from table-name where id = :bind-variable;
The corresponding VSAM read statements look like this:
MOVE value TO key1.
READ file-name RECORD [INTO ws-field]
    INVALID KEY
        do something
    NOT INVALID KEY
        do something else
END-READ.
- Data Dictionary translations — The data dictionary holds the information to map the source tables to the native files or record sets, the column/field mappings, the data type mappings, and other information necessary to transform the calling SQL to the native call interface. IT developers must have the capability to query the metadata of the remote, non-Oracle database in order to diagnose problems. In Oracle, the data dictionary is stored in Oracle tables. The data dictionary translation information on the mainframe is typically in a flat file, a sequential file, or a relational database.
- Indexing — Anyone who has done relational database SQL coding knows the importance of indexes. Have the right ones and your application runs fast; have the wrong ones, or too many, and performance suffers. Indexing must include the ability to create indexes on legacy non-relational sequential data sets such as VSAM files. This means you can have direct indexed access, even to data that does not have a native index. Where native indexes exist, they should automatically be used to optimize query performance. This support becomes more important as you get more sophisticated with your SOA data sets and start joining relational and non-relational data sets. Joining data sets without indexes becomes painfully slow.
- Data type translation — Legacy data types must often be converted when viewing the data in a relational format, and these translations need to happen transparently. Many translations are straightforward: date and time, floating point, pic x, and pic n fields map easily to relational data types. Three common data types in legacy environments that do not map easily to relational databases are Comp-3 (packed decimal), redefines, and occurs. In COBOL, these data types look like this:
- Packed decimal
05 BALANCE-DUE PIC S9(6)V99 COMP-3.
- Redefines
05 a pic s9(4) comp-5.
05 a1 redefines a pic x(2).
- Occurs
05 MONTHLY-SALES OCCURS 12 TIMES PIC S9(5)V99.
The corresponding Oracle data types look like this:
- Packed decimal
BALANCE-DUE DECIMAL(8,2);
- Redefines
a BINARY;
a1 CHAR(2);
The redefines becomes two separate columns in the relational table.
- Occurs
MONTHLY-SALES-JAN NUMBER(7,2);
. . .
MONTHLY-SALES-DEC NUMBER(7,2);
The relational table has 12 columns, one for each month in the year. You will have 12 separate columns coming back in the resulting SOAP message.
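To make the Comp-3 translation concrete, here is a minimal sketch of how an engine might unpack the BALANCE-DUE field above. The byte layout (two BCD digits per byte, sign in the final nibble) is the standard IBM packed-decimal format; the Python function itself is purely illustrative, not any vendor's implementation.

```python
def unpack_comp3(raw: bytes, scale: int) -> float:
    """Decode an IBM packed-decimal (COMP-3) field.

    Each byte holds two BCD digits; the low nibble of the last
    byte is the sign (0xD = negative, 0xC or 0xF = positive).
    """
    nibbles = []
    for b in raw:
        nibbles.append(b >> 4)      # high digit
        nibbles.append(b & 0x0F)    # low digit
    sign = nibbles.pop()            # last nibble is the sign, not a digit
    value = 0
    for d in nibbles:
        value = value * 10 + d
    if sign == 0xD:
        value = -value
    return value / (10 ** scale)    # apply the implied decimal point (V99 -> scale 2)

# PIC S9(6)V99 COMP-3 holding -1234.56 is stored as x'000123456D'
print(unpack_comp3(bytes([0x00, 0x01, 0x23, 0x45, 0x6D]), 2))  # -1234.56
```

The scale argument corresponds to the V99 in the picture clause; a production translator reads it from the data dictionary rather than passing it by hand.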
Note
We don't want to turn this into an XML parsing and transformation book, but it is important to mention this briefly. XML processing — parsing and transformation — is both a memory-intensive and a processor-intensive activity. It is an important part of the LSE, as all service requests will come as XML streams, and all responses will go out as XML streams. Therefore, it is critical that the LSE has a highly performant XML engine.
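One common technique for keeping XML processing memory-efficient is streaming: handling elements as they are parsed and discarding them, instead of building the whole document tree first. The sketch below uses Python's standard library purely to illustrate the idea; it is not how any particular LSE implements its XML engine.

```python
# Streaming parse of a (toy) result stream: each element is processed
# and released as soon as it is read, so memory stays flat no matter
# how large the response is.
import io
import xml.etree.ElementTree as ET

response = io.BytesIO(
    b"<results><row><balance>10.00</balance></row>"
    b"<row><balance>25.50</balance></row></results>"
)

total = 0.0
for event, elem in ET.iterparse(response, events=("end",)):
    if elem.tag == "balance":
        total += float(elem.text)
    elem.clear()  # release the element as soon as it is processed

print(total)  # 35.5
```

A DOM-style parser would hold the entire response in memory before any processing could start, which is exactly the cost a high-volume LSE needs to avoid.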
The next components of the LSE we will discuss are often part of the LSE, although they don't need to be. They can be separate processes that reside in the middle-tier, on the mainframe, or on both. There are pros and cons to each of these configurations, which we will discuss in the implementation/deployment section of this chapter. For now, we will review each of these components:
Orchestration supports execution of a business process. The business process can consist of a number of steps, transactions, decision points, and other business processes. Orchestration is really just a fancy word for traditional workflow. What is different is that orchestration engines have a common, standardized execution language and runtime engine. Most workflow engines are built upon proprietary languages (T-SQL, PL/SQL, VB Script), and run in proprietary runtime engines. The execution language for orchestration is BPEL, and the runtime engine is Java. BPEL supports both human and automated orchestration. In Legacy SOA, BPEL is important because in many instances, the legacy services will be integrated with other Java, .NET, or REST based services that reside on open systems. For example, a web-based shopping cart is mostly open systems based, but it needs to communicate with a mainframe-based inventory management system to fulfill the order.
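The shopping-cart example can be sketched as a small orchestration: a sequence of service calls with a decision point. The function names (check_inventory, charge_card, backorder) are invented stand-ins for the legacy and open-systems services a BPEL process would actually invoke.

```python
# Hypothetical orchestration flow mirroring the shopping-cart example.

def check_inventory(item, qty):
    # Stands in for the mainframe inventory management service.
    stock = {"widget": 5}
    return stock.get(item, 0) >= qty

def charge_card(card, amount):
    # Stands in for a Java/.NET payment service on open systems.
    return {"status": "approved", "amount": amount}

def backorder(item, qty):
    return {"status": "backordered", "item": item}

def checkout(item, qty, unit_price, card):
    # Decision point: only charge the card if the item is in stock.
    if check_inventory(item, qty):
        return charge_card(card, qty * unit_price)
    return backorder(item, qty)

print(checkout("widget", 2, 9.99, "4111-...."))
```

In BPEL the same flow would be expressed declaratively (invoke, switch, invoke), with the engine handling faults, compensation, and long-running state that this sketch omits.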
Security is a broad topic and includes everything from basic authentication and authorization, and data encryption on the network, to non-repudiation (proof that you are who you say you are) and integration with mainframe security. Mainframe security includes, amongst others, Resource Access Control Facility (RACF) and Access Control Facility 2 (ACF2). Everyone thinks technologies such as RACF and ACF2 make mainframes secure, but in reality, they are secure because of their physical nature — the hardware is in one location, hosted in a locked, physically secure facility, and all communication happens over a private network. RACF and ACF2 don't do anything to enhance this locked-down hardware environment. In open systems, as the infrastructure and user community are geographically dispersed and public networks are used, the LSE should consider including these technologies:
- Java Authentication and Authorization Service (JAAS) — As the name implies, JAAS is for determining whether the user or device is authorized to access the service and has the correct privileges. The authentication and authorization information is typically stored in a Lightweight Directory Access Protocol (LDAP) server.
- Secure Sockets Layer (SSL) — SSL has become the de facto standard for securing HTTP traffic over the Internet. SSL is very simple to deploy, and all application server vendors support SSL traffic. In most cases, SSL as the security mechanism will do just fine.
- WS-Security — WS-Security is a set of standards for ensuring the integrity and confidentiality of data transmitted via Web services. The specific standards most important to your LSB are WS-Policy, Security Assertion Markup Language (SAML), XML Signature, and XML Encryption. These standards are constantly evolving and changing, so the best place to find more information is OASIS (http://www.oasis-open.org/), the standards body responsible for WS-Security.
LSEs will also typically integrate with RACF and ACF2 since the services reside on the mainframe. Check with your LSE vendor to see what type of 'out of the box' mainframe security integration they provide.
Caching may seem like it should be discussed along with performance. Perhaps, but performance is a broad topic, while caching is a specific technology that most LSE vendors use to improve end-to-end response time. Caching basically involves storing data in memory; the data may be from a database, a screen, a transaction, or any other legacy artifact. An LSE that does not have some type of caching will not perform as well as one that does.
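The idea is simple enough to sketch. Below is a minimal time-to-live cache of the kind an LSE might use to avoid repeated round trips to the host; real products add eviction policies, size limits, and invalidation hooks, and the names here (TTLCache, fetch_from_host) are our own.

```python
# A minimal TTL cache sketch: serve repeated requests from memory
# until the entry expires, then go back to the mainframe.
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, fetch):
        value, expires = self.store.get(key, (None, 0.0))
        if time.time() < expires:
            return value                      # served from memory
        value = fetch(key)                    # cache miss: call the host
        self.store[key] = (value, time.time() + self.ttl)
        return value

calls = []
def fetch_from_host(key):
    calls.append(key)                         # stands in for the legacy call
    return {"acct": key, "balance": 100.0}

cache = TTLCache(ttl_seconds=60)
cache.get("42", fetch_from_host)
cache.get("42", fetch_from_host)              # second call hits the cache
print(len(calls))  # 1
```

Only one host round trip is made for the two requests, which is exactly where the end-to-end response time improvement comes from.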
Just like end-user interfaces, the latest trend in development environments is thin-client and web-based. There is also a trend towards power users (tech-savvy business users) being able to use the design tool on their own. A combination of these two trends would involve the end user using a web-based application to define his or her own Web services. It is up to you to decide which is the best LSE design/development environment for your company:
- Thick or thin — Thick clients (high-powered PCs with heavy graphics) have more 'bells and whistles', and are usually much more graphics intensive. However, they require a more expensive workstation for the developer. With thin (web-based) client tools, the development environment can be accessed from anywhere. To use a thick client, developers need to have their PC or laptop with them, or have the software installed on every machine they use. This can lead to a maintenance nightmare, as all workstations need to have the latest development software loaded on them.
- COBOL/legacy or Java EE/open systems centric — This choice comes down to whether you have more COBOL developers or more Java EE developers. Some tools emphasize that a COBOL developer can easily develop and deploy mainframe-based Web services. However, if all your developers know Java EE, or if the groups responsible for the middleware platform are all open systems developers, this may not be the best choice.
- Power users or developers — You may want your business users to define Web services on their own. They are ultimately the customers, and know what they want. However, they don't know the full ramifications of the services they are creating, or the load they will put on the IT infrastructure. I am sure we have all heard the horror stories of power users being allowed to write their own ad hoc database queries.
LSE implementation and deployment must take into consideration the LSE server location, the legacy artifacts, and the metadata repository. LSEs can run entirely on the mainframe, entirely off the mainframe, or, in some cases, with part of the server on the mainframe and part off it. In cases where the LSE is part of the CICS region, it will of course run on the mainframe. The legacy artifacts are the components on the mainframe that we are going to expose as services. The metadata repository contains the mapping and transformation information.
It is not just a matter of deciding whether to run the LSE on or off the mainframe, but also whether to run the process inside a CICS region. For the most part, the vendor will decide this for you: most vendors run outside the CICS region, but a few run inside it.
- Within the CICS region — The most obvious limitation here is that you must be accessing artifacts (CICS transactions, BMS maps) that are CICS-based. The advantages are that you are closer to where the processing is happening, and the process takes up less memory.
- Outside CICS — Outside the CICS region, you can perform any type of screen, transaction/application, or data SOA integration that you want.
- Off-mainframe server — In some cases, it is not practical, or not allowed, to run any new processes on the mainframe. Distributed Relational Database Architecture (DRDA) servers are processes that run off the mainframe and access DB2 on a mainframe server. Some LSE vendors architect their solution to run off the mainframe.
- Hybrid — This is dependent on the vendor. LSE processes that make sense running closer to the legacy artifacts run on the mainframe, while other LSE components run on open systems if most processing takes place there. BPEL is an example where it makes sense to run some of the processing — orchestration of CICS transactions — on the mainframe, and to run it on open systems when integrating with Java EE or .NET Web services. Oracle has built-in middleware JCA adapters for CICS, IMS/TM, IMS/DB, and VSAM. These adapters are actually OEMed technology from Attunity. Oracle also offers DRDA servers for access to DB2 on mainframes and iSeries/AS400; as they are DRDA servers, no footprint is required on the mainframe or the midrange server. Oracle partners with companies that offer LSE solutions: Attachmate, DataDirect, GT Software, Hostbridge, Seagull Software, Micro Focus, and Treehouse. These companies have solutions that either complement what Oracle offers out of the box, or provide support for presentation- or data-tiers not offered directly by Oracle, such as 3270, 5250, Natural, Adabas, Datacom, and others.
Legacy artifacts are the actual pieces of logic, screens, or data that reside on, and are processed on, the mainframe. The legacy artifact is what the business users want to get to. Since these artifacts were already described in detail earlier in the chapter, we will focus on why you would choose one legacy artifact/access method over another:
- Presentation-tier — This refers to mainframe 3270 or VT220 (DEC) transmissions, iSeries (5250) transmissions, and others. Remember, this is not terminal emulation and screen scraping; those techniques were inflexible and tightly coupled to the device. This solution uses the actual field names in the screen map.
Why choose presentation-tier over applications, data, and others? The simple answer is that none of the application source is available. Other reasons may be that the data stores cannot be accessed directly because of security or privacy restrictions, or that no stored procedures or SQL exist in the application. SOA enablement of the mainframe application is as simple as running and capturing the screens, menus, and fields you want to expose as services. This is fast, simple, and for the most part, easy.
- Application — Application service enablement is more than wrapping transactions as Web services. This is all about service enabling the behavior of the system, and includes CICS/IMS transactions, Natural transactions, IDMS and ADS/O dialogs, COBOL programs, and batch processes. This also includes the business rules, data validation logic, and other business processing that are part of the transaction.
Why application-based legacy SOA? The application is at the core of most systems. The application contains the screens that are run, the business logic, business rules, workflow, security, and the overall behavior of the legacy system. Transactions on mainframe systems are the way that IT users interact with the system. So using the application layer makes the most sense when you want to replicate the functionality that the legacy system currently provides. This approach allows you to leverage all the behaviors (rules, transaction flow, logic, and security) of the application without having to reinvent them on open systems.
- Data — The data can be relational or nonrelational on the legacy system. In most cases, the legacy system will have a nonrelational data store such as a keyed file, network database, or hierarchical file system. When accessing data in a legacy system, the SOA Integration layer will use SQL to provide a single, well-understood method of accessing any data source. This is important, as some customers will prefer SQL-based integration as opposed to SOA-based data integration. The IT architect may decide that issuing a SQL statement from an open systems database is easier than putting an entire SOA infrastructure in place.
Why data? This is ultimately the source of truth. This is where the information that you want is stored. If you are using any of the other three artifacts, those methods will ultimately call the data store. So it seems very reasonable that most of your services act directly against the data. Sometimes, security and privacy concerns will make this impossible. Sometimes, the data needs business logic, business rules, or transformation applied before it is valid. However, if these concerns do not apply, going straight to the data source is a nice way to proceed.
- Other — Stored procedures and SQL are the way most distributed applications get results from a data store. Stored procedures also provide major benefits in the areas of application performance, code re-use, application logic encapsulation, security, and integrity. Why stored procedures and SQL? You are making a move to distributed, open systems, and relational databases so you should use technologies that work well in this environment. There is also the people and skills aspect. Your open systems developers will be very familiar with stored procedures, and will find them easy to develop.
Note
Keeping It Real: An architect who is experienced in SQL access and relational databases may be tempted to perform Legacy SOA Integration using direct access (SQL instead of Web services) to the mainframe data. There are a couple of fundamental flaws in this approach:
In some cases, the programs may not be available, and the legacy system owners may not allow direct access to the data, or the data does not exist in a clean format.
In many instances, mainframe data is not readily usable until the appropriate business logic or business rule transforms it into meaningful information. On a mainframe, the business rules and business logic are frequently implemented as a program, or as part of a transaction.
In these cases, the user interface or application approach is the one to take for your SOA architecture.
The various combinations of products being used will result in multiple repositories. The objective is to have as few as possible, and the ones you do have should be able to share information with each other. The LSE, BPEL, ESB, application server, security, services management, and other products that are part of the middleware can each have their own repository. At a minimum, you will have two repositories: one for the application server and one for the LSE. If a best-of-breed approach is taken, you could end up with a handful of repositories.
Other Technical and Business Aspects
The third dimension mentioned earlier cuts across all of the components just covered. This dimension is a combination of technical, business, and human factors — yes, human nature in a technical book.
Scalability is important because legacy personnel are accustomed to mainframe technologies such as Sysplex and CICSplex, which provide scalability on the mainframe. Some questions to ask an LSE vendor are: Does the engine take advantage of Sysplex and/or CICSplex? Can the engine be distributed across multiple mainframe LPARs?
Usually, performance is determined by the developer's application code. You can tune the operating system, network, and database, but performance issues will usually come down to an SQL statement or a poorly written piece of Java code. In the case of SOA Legacy Integration, the biggest areas of concern are not just the developer's application code, but also the performance of the LSE. On the developer side, the biggest concerns are the results coming back (is there a qualifier — a where clause — on the data result set?) and the number of requests made to the legacy system to get all the data required. Having coarse-grained legacy services, instead of fine-grained services, can control the number of requests. Coarse-grained services (several operations bundled together) eliminate unnecessary network traffic and back-and-forth 'chatter' with the legacy system. The performance of the LSE can be addressed through caching, SQL optimization, and high-speed XML processing. These are questions you will need to ask your potential LSE vendors. Workload Management (WLM) and load balancing can also help.
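The coarse-grained versus fine-grained trade-off is easy to quantify. The sketch below is illustrative only: host_call stands in for a network round trip to the legacy system, and the field names are made up.

```python
# Fetching name, address, and balance as three fine-grained services
# costs three host round trips; one coarse-grained "customer profile"
# service bundles them into a single request.

trips = {"count": 0}

def host_call(fields):
    trips["count"] += 1                      # each call is one network round trip
    return {f: f.upper() for f in fields}    # dummy host response

def fine_grained(fields):
    result = {}
    for f in fields:
        result.update(host_call([f]))        # one trip per field
    return result

def coarse_grained(fields):
    return host_call(fields)                 # one trip for the whole bundle

fine_grained(["name", "address", "balance"])
coarse_grained(["name", "address", "balance"])
print(trips["count"])  # 4 total: 3 fine-grained trips + 1 coarse-grained
```

With real network latencies of tens of milliseconds per trip, the fine-grained version's extra round trips dominate end-to-end response time, which is the 'chatter' the text warns about.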
If your application server and/or LSE goes down, it can cost you large sums in lost revenue, lost credibility with your business users, negative impact on your entire IT operation, and more. Therefore, application server and LSE failover is key to the success of your SOA Legacy Integration architecture. The simple approach, and actually the one most commonly used in mainframe environments, is to have a duplicate system configured at another location. However, this can be expensive and difficult to manage, and it can take hours to bring the remote site online. This is an area where open systems have made advances, because Internet-based applications are 24x7. Most LSB vendors offer the capability to fail over automatically and transparently to another configuration, in either an active-passive or an active-active arrangement. In active-passive, the failover configuration performs a cold failover: the environment needs to start up at the time of failover, and in-flight transactions must be resubmitted. In active-active, a hot failover is made to a configuration that is up to date with all the transactions in progress, so transactions are automatically restarted and, in some cases, picked up where they left off.
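From the client's point of view, transparent failover can be as simple as retrying against a standby endpoint. This is a toy sketch: the endpoint names and the simulated outage are invented, and real failover involves health checks and state replication that are omitted here.

```python
# Client-side failover sketch: try the primary endpoint, and on
# failure retry against the standby.

def call(endpoint, request):
    if endpoint == "primary":
        raise ConnectionError("primary is down")  # simulate an outage
    return {"endpoint": endpoint, "result": "ok"}

def call_with_failover(request, endpoints=("primary", "standby")):
    last_error = None
    for ep in endpoints:
        try:
            return call(ep, request)
        except ConnectionError as e:
            last_error = e            # fail over to the next endpoint
    raise last_error                  # every endpoint is down

print(call_with_failover({"svc": "getBalance"}))
```

In an active-active configuration, both endpoints are live and the retry succeeds immediately; in active-passive, the standby must first be brought up, which is the cold-failover delay described above.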
When you first embark on your journey to Legacy SOA Integration, it is quite likely that most services will not require transaction processing (insert, update, and delete); they will be query-only, or read-only. As legacy SOA matures, we will begin to see more OLTP-based legacy SOA environments. The next phase of SOA Legacy architecture requires high-speed transaction processing across multiple systems: mainframe, mid-range, and open system environments.
While Web services can be based on any transport, they typically use stateless HTTP communication. In addition, not all data sources can support the common commit procedures. For example, an Adabas data source is capable of supporting only one-phase commit, and cannot participate in a two-phase commit coordinated infrastructure. While these constraints would seem to limit transaction support in an SOA integration environment, they are not really a concern as long as the underlying application server and LSE have transaction support. One method of transaction support is the use of the industry-standard XA (eXtended Architecture). XA is a very mature distributed transaction standard. Some would argue that just because it is mature does not make it the right technology to use; they see OASIS Web Services Transaction (WS-TX) or the vendor-led WS-Transaction as better because they are Web services centric. However, as of today, XA is supported by all transaction processors (CICS, Tuxedo), application servers (Oracle, BEA, IBM), and database servers (Oracle, IBM, Microsoft). As we will see in the next chapter, XA transactions can be used to support the transaction needs of your SOA Integration architecture.
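The core of XA's two-phase commit protocol can be shown in a few lines. This is a toy coordinator for illustration only; a real XA transaction manager also handles logging, recovery, timeouts, and heuristic outcomes.

```python
# Toy two-phase commit in the spirit of XA: every resource manager
# must vote yes in the prepare phase before any of them commits.

class Resource:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "idle"

    def prepare(self):
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"

def two_phase_commit(resources):
    if all(r.prepare() for r in resources):   # phase 1: prepare and vote
        for r in resources:
            r.commit()                        # phase 2: commit everywhere
        return "committed"
    for r in resources:
        r.rollback()                          # any "no" vote rolls everything back
    return "rolled back"

cics = Resource("CICS")
oracle = Resource("Oracle DB")
print(two_phase_commit([cics, oracle]))  # committed
```

A one-phase-only source such as Adabas cannot answer the prepare question, which is exactly why it cannot join this coordinated protocol.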
SOA governance is about managing the portfolio of services, planning the development of new services and updates to current services, managing the service lifecycle, using policies to restrict behavior, and monitoring performance. Industry pundits speak of SOA governance as being very important to SOA architecture. In reality, when you are just starting off and creating internal Web services, or read-only external services, the need for governance is limited. You will have fewer than twenty services, so there is no need for a sophisticated service management tool. When you create a new version or release of an existing service, you will probably just delete the old service and put the new one into production. No need for a crazy scheme to make sure all existing transactions use the old service, and new requests use the new one. When you have more services and many consumers of your legacy Web services, version and change management control will become important. What if all your customers (Web services consumers) cannot upgrade to the latest release of your Web services? You will need SOA governance products to handle requests for different versions of the same service from different clients. Monitoring performance, however, is something that should be implemented immediately.
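The version-routing problem can be sketched as a simple dispatch table of the kind a governance layer maintains, so consumers pinned to v1 keep working while new consumers get v2. The service names and payloads here are invented for illustration.

```python
# Hypothetical version routing: the same logical service, two
# concurrent versions, each consumer requesting the one it supports.

def get_balance_v1(acct):
    return {"balance": 100.0}

def get_balance_v2(acct):
    return {"balance": 100.0, "currency": "USD"}  # new field in v2

ROUTES = {
    ("getBalance", "1"): get_balance_v1,
    ("getBalance", "2"): get_balance_v2,
}

def dispatch(service, version, payload):
    # A governance product would also apply policies, quotas,
    # and monitoring at this point.
    return ROUTES[(service, version)](payload)

print(dispatch("getBalance", "1", "42"))
print(dispatch("getBalance", "2", "42"))
```

Without something like this routing layer, releasing v2 forces every consumer to upgrade at once, which is exactly the situation the text warns about.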
The human factor is a mix of the type of resources, company culture, and adaptability. What type of resources do you have? Are they mostly proficient at Java/.Net, COBOL, JCL, or some other language? Do they work on UNIX, Linux, Windows, or z/OS? These variables play a heavy role in the implementation process and target Oracle architecture.
People don't like change. Don't think your older COBOL programmers have a corner on the 'I don't like change' market. I have been involved in BEA-to-Oracle Application Server and Sybase-to-Oracle Database migrations with younger developers and DBAs who resist change. They say, "I am a Sybase DBA", not "I am a DBA". Well, any form of Legacy Modernization involves change, and in most cases big-time change. Both the company culture and the adaptability of the people involved can make or break a project. Over years of managing IT projects, it has become obvious that technology does not make or break an IT project; people and company culture do.
We saw how systems developed over 30 years became fragile as people, business initiatives, and technologies constantly changed. With limited resources of money, time, and people, we end up developing an architecture that is not agile. What staggers me is the number of companies that, given a chance to start fresh with modern technologies, develop Legacy SOA Integration architectures that are not flexible, are cobbled together from nonintegrated components, use proprietary technologies, and differ across organizations within the same company or division.
It is important to understand the extent of the LSE vendor's support for operating systems, presentation-tiers, transactions, languages, and databases. Common operating system support includes IBM zSeries (S/390), IBM iSeries (AS/400), UNIX, and Unisys. Less commonly supported operating systems include IBM VSE, HP OpenVMS, and HP e3000. Before you choose an LSE product, make sure it supports the legacy presentation-tier, data, transactions, and/or languages that you have. Some products may support only the presentation-tier, or only CICS transactions. Keep in mind all the different combinations of host support as we decide on the implementation options in the next section.