Preface

Architecture in the development of a software system is not unlike the architecture of a building: it encompasses the theoretical and ideal practices used in engineering and construction. Tempered by the reality of programming languages and operating systems, it becomes the design framework that supports a complete life-cycle process. Careful planning and the prototyping of model systems must take place for a proposed architecture to be adopted successfully. Documenting the proposed architecture therefore serves a purpose akin to writing formal requirements and specification papers.

The building architect designs buildings in various styles with bricks, wood, glass, and steel in mind; the software architect works with an intangible mix of code, executables, computer processors, and network protocols. Software architecture concerns how the software will be put together rather than what it must accomplish, although the business domain must still be considered when architecting a software system. The true value of what is built lies in the "space it encloses," and any particular functionality can be transcended by a unique genre of work: the target architecture. A target architecture is a statement of how developers will work to produce a proposed software system using currently available technologies, with respect to style, convention, overall appearance, and structure.

In our case, the target architecture is a view of how computer applications can be constructed using object-oriented analysis and design, high-level programming languages, relational databases, and web-based technologies. The proposal is to use a proven combination of tools and technologies to produce a complete and consistent system of development. A cohesive team using this common set of technologies produces a code base that is more portable and easier to maintain.

The n-tier paradigm

N-tier systems may use multiple computing platforms, network technologies, and portable software components; this physical distribution and/or logical division of an application's parts is what differentiates the paradigm from monolithic software systems. Models from industry leaders for standardizing the architecture and engineering of these systems have gained wide recognition; however, experience has shown that what actually takes place in any one enterprise becomes quite proprietary. From our point of view a client/server application would be a 2-tier system and a typical web application would be at least a 3-tier system.

Since software artifacts are often described with terms such as "layered," "structured," "vertical," and "top-level," the notion of tiers fits quite well. Picture the tiers as the floors of a building where different departments reside, with the elevators and stairs as the "interfaces" between floors; the third floor cannot be reached without passing the second. Some tiers may break that view, spanning several floors like an atrium. The term "tier" is somewhat of a misnomer, in that the tiers are better thought of as "departments" or "divisions" where some sort of work is done, with means of communication and work flowing between those divisions.

Division of work - the tiers

A key premise of an n-tier system is the division of work into specialized areas: data storage mechanisms, business logic, and display handlers. How the interfaces between tiers are designed and implemented is also paramount. These logical divisions can become physical divisions of components and modules as we construct them in the target environment. In other words, what is expected to go on in each tier, and where the components physically reside, is part of the architecture. Specialized, modular components that can be shared among different applications also facilitate greater code reuse.
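The separation into tiers joined by stable interfaces can be sketched as follows. The example is in Python for brevity (the target environment described later uses VB or C#), and all class and method names are illustrative, not part of the architecture itself:

```python
from abc import ABC, abstractmethod

# Hypothetical interface between the business-logic tier and the data tier.
class CustomerStore(ABC):
    @abstractmethod
    def find_by_id(self, customer_id: int) -> dict: ...

# A concrete data-tier component implements the interface; the business
# tier depends only on the abstraction, so either side can be replaced.
class InMemoryCustomerStore(CustomerStore):
    def __init__(self):
        self._rows = {1: {"id": 1, "name": "Acme Ltd."}}

    def find_by_id(self, customer_id: int) -> dict:
        return self._rows[customer_id]

# A business-tier component that knows nothing about the storage mechanism.
class CustomerService:
    def __init__(self, store: CustomerStore):
        self._store = store

    def display_name(self, customer_id: int) -> str:
        return self._store.find_by_id(customer_id)["name"]
```

Because the interface, not the implementation, crosses the tier boundary, the in-memory store could be swapped for a database-backed one without touching the business tier.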

An often overlooked benefit of n-tier systems is the division of work among the developers, technical writers, and other people directly involved in building the software. It allows specialists to focus on their areas of expertise, producing higher-quality code and documentation than if their work were overly diversified. Developing skills that cross the interface boundaries is more manageable, and more desirable, than attempting to develop skills that span the entire system. That does not preclude "vertical development," where each developer or team is responsible for working in all tiers to accomplish their work. If you organize the work in a matrix where the horizontal represents the tiers and the vertical represents the use cases, each intersection becomes a logical unit of work; for example, given a use case called "Online Store," you could assign resources to work on the "Online Store - Business Logic" piece.

Relational databases

Only basic features of relational database technology are used to store the persistent data produced and required by an application; a few general design considerations follow. Most tables are created with a single primary-key id column and at most one unique key, which may be a composite key (multiple columns). Foreign keys maintain referential integrity where appropriate and support implied relationships between tables. Unique and non-unique indexes may be added to optimize queries or search conditions. Stored procedures and triggers will be used as well; however, only "canonical rules" (rules that support data constraints) will typically be supported here. The "business rules" will reside predominantly in the business logic layer.
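The table conventions above can be sketched concretely. This is a minimal illustration using SQLite through Python (the actual enterprise database would differ), with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,      -- single primary-key id column
        first_name  TEXT NOT NULL,
        last_name   TEXT NOT NULL,
        UNIQUE (first_name, last_name)        -- at most one (composite) unique key
    )""")

conn.execute("""
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL
                    REFERENCES customer(customer_id)  -- referential integrity
    )""")

# Non-unique index added to optimize a common search condition.
conn.execute("CREATE INDEX ix_order_customer ON customer_order(customer_id)")
```

With the foreign key in place, an order row that references a nonexistent customer is rejected by the database itself: a canonical rule enforced in the data store, while business rules stay in the business logic layer.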

The relational database is analyzed to produce a metadata model, which becomes the basis for generating the data access objects, the stored procedures, and the code that interfaces the data access objects with the database: the data access layer.
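A sketch of the first step, deriving a simple metadata model from the database catalog, follows. SQLite's PRAGMA interface stands in for whatever catalog views the real database exposes, and the shape of the metadata model is an assumption made for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    )""")

def table_metadata(conn, table):
    """Build a {column: {"type": ..., "pk": ...}} model for one table
    by reading the database's own description of it."""
    meta = {}
    for _cid, name, ctype, _notnull, _default, pk in conn.execute(
            f"PRAGMA table_info({table})"):
        meta[name] = {"type": ctype, "pk": bool(pk)}
    return meta

model = table_metadata(conn, "customer")
```

A code generator would walk such a model to emit the data access objects and their supporting stored procedures.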

Data access layer

Data access is considered an interaction between a data access tier component and the enterprise data store (a relational database system). The mechanics involve marshalling, buffering, and conversion of data. The semantics include connections, commands, and authorizations. It is important to separate the key elements into distinct objects to produce the most generic solution, with interfaces that remain stable even when the target database schema changes.

Data Access Objects are designed to objectify a relational database, mapping objects to data rows and providing access to the database through their methods and properties. These objects are required to work in a "stateless" environment (i.e., without a constant connection to the database) and provide basic data validation, concurrency control, and error handling. Residing closest to the database in an n-tier application, they loosely couple their client components to the data access layer and therefore to the database. They are implemented using code generation, with provision for hand-coded properties and methods. The generated portion provides the canonical functions for database access (select, insert, update, and delete) as well as support for entity relationships, unique index lookups, and object collections. The hand-coded portions address special features, data storage structures, parent/child relationships, or client requirements not provided by the generated portion. Various native-language forms are produced through the generation process for unit testing and debugging during development.
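What the "generated" portion of such an object might look like can be sketched by hand. This is not the output of the book's generator; it is a hypothetical Python rendition of the canonical select/insert/update/delete functions, backed here by SQLite:

```python
import sqlite3

class CustomerDAO:
    """Maps one row of the (illustrative) customer table."""

    def __init__(self, customer_id=None, name=None):
        self.customer_id = customer_id
        self.name = name

    # --- canonical functions (the generated portion) ---
    def insert(self, conn):
        cur = conn.execute("INSERT INTO customer (name) VALUES (?)",
                           (self.name,))
        self.customer_id = cur.lastrowid   # pick up the generated key

    @classmethod
    def select(cls, conn, customer_id):
        row = conn.execute(
            "SELECT customer_id, name FROM customer WHERE customer_id = ?",
            (customer_id,)).fetchone()
        return cls(*row) if row else None

    def update(self, conn):
        conn.execute("UPDATE customer SET name = ? WHERE customer_id = ?",
                     (self.name, self.customer_id))

    def delete(self, conn):
        conn.execute("DELETE FROM customer WHERE customer_id = ?",
                     (self.customer_id,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
```

The connection is passed in per operation rather than held, consistent with the "stateless" requirement; hand-coded properties and methods would be added alongside these in a separate partial.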

Presentation Handlers and User Interface components can use these objects directly to produce the necessary code for data clients; however, the preferred method is to produce business-layer components that add specialization for particular functional areas.

Business logic tier

Business Logic Tier components contain functionality to support the use cases described by the business and functional requirements. Typically these components use the data access objects and avoid direct interaction with the database via record sets or other data provider abstractions. That convention encapsulates and optimizes all access to the data store through the data access layer and insulates the application programmer from using SQL directly. Since the data access tier is primarily generated code, this also removes a lot of mundane work from the developer's workload, allowing more time to be spent solving business problems. The convention does not preclude one from using other means to access the database; however, that is crossing the "NO SQL" line in the target architecture.
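The division of responsibility can be sketched as follows. The DAO here is a stand-in stub, and the credit-limit rule and all names are invented for illustration; the point is that the business rule lives in the business tier and persistence goes only through the data access object:

```python
# Stand-in for a generated data access object (in-memory for the example).
class OrderDAO:
    _rows = {}

    def __init__(self, order_id, total):
        self.order_id, self.total = order_id, total

    def insert(self):
        OrderDAO._rows[self.order_id] = self

class OrderService:
    """Business-logic tier component: rules here, never SQL."""
    MAX_TOTAL = 10_000   # a business rule, not a database constraint

    def place_order(self, order_id, total):
        if total > self.MAX_TOTAL:
            raise ValueError("order exceeds credit limit")
        OrderDAO(order_id, total).insert()   # persistence via the DAO only
        return order_id
```

If the rule changes, only this component changes; the data access layer and the schema are untouched.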

Presentation handlers

The advent of browser-based clients has provided a wider range of options with respect to where "client side" processing logic occurs. A need for "lightweight" display logic, like the paradigm used in terminal-hosted environments, is one driver for using presentation handlers. This tier is where a specific UI implementation is addressed, and it is coupled to the UI component's capabilities. In its simplest form, this is just a logical division of work for the UI programmer, where long and laborious routines are separated from the code that displays and captures information from the user.
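In its simplest form, then, a presentation handler takes raw business data and shapes it for one specific UI. A hypothetical sketch, with invented names and formatting rules:

```python
class CustomerPresenter:
    """Presentation handler: laborious formatting lives here,
    not in the display code."""

    def __init__(self, fetch_customer):
        self._fetch = fetch_customer   # injected business-tier call

    def view_model(self, customer_id):
        c = self._fetch(customer_id)
        return {
            "title": f"Customer #{customer_id}",
            "display_name": f"{c['last_name']}, {c['first_name']}".upper(),
        }

# The UI component just renders whatever view model it is handed;
# here a lambda stands in for the business tier.
presenter = CustomerPresenter(
    lambda _id: {"first_name": "Ada", "last_name": "Lovelace"})
```

Because the handler is coupled only to the view model, the same business data could feed a desktop form, a web page, or a printed report.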

Reporting functions may also use presentation handlers, as reports represent another type of "display." Whether printed on paper or viewed within an electronic document system, they are essentially a "one-way" user interface.

User interfaces

For our purposes, Windows® is where it's at. A strong point of Windows-based desktop applications has always been the graphical user interfaces you can develop. With Visual Studio .NET development in VB or C# it is better than ever, and producing web browser based interfaces for applications has become easier too. If the business layer is designed to be neutral with regard to how the user interfaces are implemented, it can be used in either desktop or web-based development.

If you are considering desktop executables, you must keep in mind the deployment of such components across the enterprise or to stand-alone customers. If designed as a web application, the system will likely have both internal and external components. Customers and remote offices clearly benefit from, and require, Internet-accessible systems. In-house users would have intranet access to the web-based applications as well as "native channel" systems on a LAN or VPN.


Work flow engine

Depending on the nature of the work, the application may have some automation built in, or it may be all about automation. What begins as a set of conventions for sequencing the tasks that complete a job turns into a component that interacts with business logic or presentation handler components. It makes decisions and sends messages to its clients about how it is proceeding, given some known criteria. Not to be confused with the flow of work through the business, this controls the flow of work that the system performs based on user interaction: screen navigation, button clicks, menu choices, data entry, and so on. It may be an application or service that monitors certain data or other parts of the system and/or reacts to some type of programmatic messaging.
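The sequencing conventions described above can be sketched as a table-driven state machine: given the current task and an event from the user or the system, the engine decides what happens next. Task and event names here are invented for illustration:

```python
# Transition table: (current task, event) -> next task.
TRANSITIONS = {
    ("enter_order", "submit"):   "validate_order",
    ("validate_order", "ok"):    "confirm_order",
    ("validate_order", "error"): "enter_order",   # send the user back
    ("confirm_order", "accept"): "done",
}

class WorkflowEngine:
    """Controls the flow of work the system performs,
    driven by screen navigation, button clicks, and so on."""

    def __init__(self, start):
        self.task = start

    def signal(self, event):
        self.task = TRANSITIONS[(self.task, event)]
        return self.task
```

Business logic or presentation handler components would raise the events; the engine replies with the next task, keeping the sequencing rules in one place.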


Object-relational mapping

The foundation of the target architecture is a relational database paired with an object-oriented data access layer, so understanding object-relational mapping is key to implementing it. Object pundits will lament the omission of encapsulation, polymorphism, and inheritance; for now, let us just say that objects contain data attributes (properties) and functions (methods) to do things. In a nutshell, if the persistent data is in RDBMS tables, the proposed architecture must in some way objectify the relational database. In order to interoperate within an application there must be some commonality across all objects, especially those considered persistent objects. That commonality is best provided through inheritance, which requires the common attributes and operations to exist in every object that complies with the enterprise's architectural model. With that proprietary object architecture come proprietary object collections. Custom object collections contain sets of objects, or references to objects, and provide operations to iterate, sort, and modify those sets.
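The common ancestry and the custom collection can be sketched together. The base class, its members, and the collection operations are illustrative assumptions, not the enterprise model itself:

```python
class PersistentObject:
    """Common ancestor for every object mapped to a data row."""

    def __init__(self, object_id):
        self.object_id = object_id   # attribute shared by all persistent objects

    def save(self):
        raise NotImplementedError    # each subclass supplies its persistence

class Customer(PersistentObject):
    def __init__(self, object_id, name):
        super().__init__(object_id)
        self.name = name

    def save(self):
        return f"saved customer {self.object_id}"

class ObjectCollection:
    """Custom collection: holds a set of persistent objects and
    provides operations to iterate and sort them."""

    def __init__(self):
        self._items = []

    def add(self, obj: PersistentObject):
        self._items.append(obj)

    def __iter__(self):
        return iter(self._items)

    def sort_by(self, attr):
        self._items.sort(key=lambda o: getattr(o, attr))
```

Any code that understands `PersistentObject` and `ObjectCollection` can work with every compliant object, which is the interoperability the architecture is after.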

On the surface many applications still simply deal with records and files while leveraging the power of relational databases, computer networks, and modern telecommunications. Since the advent of object-oriented programming, technologies have developed around interoperable objects and interfaces, resulting in the emergence of several de facto standards. Along with all that is good about object orientation, the ability to add functionality to the basic unit of data, the record (data row), is a very compelling reason why object orientation is now the mainstay of application development. Inheritance can be supported within the relational model in several ways; however, that is beyond the scope of this document.

Enterprise system transactions

In the business world the concept of a transaction is rooted in the need to come to terms on an exchange of goods or services and the actions that will encompass that exchange. In software systems the concept of a transaction often manifests itself as a literal programmatic construct. Enterprise software systems have a finite set of transactions that may occur between end users and sub-systems. Identifying and naming those transactions is often an implicit part of developing the requirements and specifications for a particular product or system. Interfaces to those transactions are often published so that each party involved can participate in the exchange without a dialogue. This is the lowest common denominator of attributes and operations that must be bundled together and understood by both parties in order for the transaction to be attempted. Controls are in place to make sure that each party is satisfied so the transaction completes successfully.

In a connectionless environment the implementation of a transaction differs somewhat from a transaction in a terminal-hosted or client/server environment. It is characterized by at least two separate message communications, typically paired in a request/response scenario that establishes the commit boundaries. The system has to accommodate both favorable and adverse outcomes, since there is a possibility that the exchange will not be completed. Transaction-monitoring technologies increase the reliability of transaction-based systems and provide a platform for implementing them; however, the definition of particular types of transactions remains a domain-specific task. If each party knows what to expect, the transaction can be completed more quickly than if a dialogue must occur before attempting an exchange. Enterprise servers often use listener programs or web services that handle predefined sets of messages. Clients know enough about the available listeners or services to establish communication, send a message to the server, and then wait for a reply. This asynchronous exchange is considered a transaction, and the contents of the reply indicate its success or failure so that the client may act appropriately.
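The request/response exchange can be sketched in miniature. The message type, its fields, and the reply shape are all invented for illustration; a real listener would sit behind a network endpoint rather than a function call:

```python
def handle_request(request):
    """Server-side listener: accepts one predefined message type
    and always returns a reply indicating the outcome."""
    if request.get("type") != "place_order":
        return {"status": "failure", "reason": "unknown message type"}
    if request.get("quantity", 0) <= 0:
        return {"status": "failure", "reason": "invalid quantity"}
    # The work between receiving the request and sending the reply
    # falls inside the commit boundary.
    return {"status": "success", "order_id": 1001}

# The client sends a message and inspects the reply to learn the outcome;
# both outcomes must be handled, since the exchange may not complete.
reply = handle_request({"type": "place_order", "quantity": 3})
```

The published interface is the message shape itself: because both parties agree on it in advance, no dialogue is needed before the exchange is attempted.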