Object-Oriented Techniques

What Is Object-Oriented Analysis?

Object-oriented analysis has become a key issue in today’s analysis paradigm. It is without question the most important element of creating what may be called the “complete” requirement of a system. Unfortunately, the industry is in a state of controversy about the approaches and tools that should be used to create object systems. This chapter will focus on developing the requirements for object systems and the challenges of converting legacy systems. Therefore, many of the terms will be defined based on their fundamental capabilities and how they can be used by a practicing analyst (as opposed to a theorist!).

Object orientation (OO) is based on the concept that every requirement ultimately must belong to an object. It is therefore critical that we first define what is meant by an object. In the context of OO analysis, an object is any cohesive whole made up of two essential components: data and processes.

Classic and even structured analysis approaches were traditionally based on the examination of a series of events. We translated these events from the physical world by first interviewing users and then developing what was introduced as the concept of the logical equivalent. Although we are by no means abandoning this necessity, the OO paradigm requires that these events belong to an identifiable object. Let us expand on this difference using the object shown in Figure 11.1, an object we commonly call a “car.”

The car shown in Figure 11.1 may represent a certain make and model, but it also contains common components that are contained in all cars (e.g., an engine). If we were to look upon the car as a business entity of an organization, we might find that the three systems shown in Figure 11.2 were developed over the years.

Figure 11.2 shows us that the three systems were built over a period of 11 years. Each system was designed to provide service to a group of users responsible for particular tasks. The diagram shows that the requirements for System 1 were based on the engine and front end of the car. The users for this project had no interest in or need for any other portions of the car. System 2, on the other hand, focused on the lower center and rear of the car. Notice, however, that System 2 and System 1 overlap. This means that there are parts and procedures common to both systems. Finally, System 3 reflects the upper center and rear of the car and overlaps with System 2.

Figure 11.1 A car is an example of a physical object.

It is also important to note that there are components of the car that have not yet been defined, probably because no user has had a need for them. We can look at the car as an object and Systems 1 to 3 as the software that has so far been defined for that object. Our observations should also tell us that the entire object is not defined and, more important, that there is probable overlap of data and functionality among the systems that have been developed. This case exemplifies the history of most development systems. It should be clear that the users who stated their requirements never had any understanding that their own situation belonged to a larger composite object. Users tend to establish requirements based on their own job functions and their own experiences in those functions. Therefore, the analyst who interviews users about their events is exposed to a number of risks:

• Users tend to identify only what they have experienced, rather than speculating about other events that could occur. We know that such events can take place, although they have not yet occurred (you should recall the discussion of using STDs as a modeling tool to identify unforeseen possibilities). Consider, for example, an analysis situation in which an invoice of $50,000 must be approved by the firm’s Controller.

Figure 11.2 This diagram reflects the three systems developed to support the car object.

This event might show only the approval, not the rejection. The user’s response is that the Controller, while examining the invoices, has never rejected one, and therefore no rejection procedure exists. You might ask why. Well, in this case the Controller was not reviewing the invoices for rejection, but rather holding them until he/she was confident that the company’s cash flow could support the issuance of these invoices. Obviously, the Controller could decide to reject an invoice. In such a case, the software would require a change to accommodate this new procedure. From a software perspective we call this a system enhancement, and it would result in a modification to the existing system.

• Other parts of the company may be affected by the Controller’s review of the invoices. Furthermore, are we sure that no one else has automated this process before? One might think such prior automation could never be overlooked, especially in a small company, but when users have different names for the same thing (remember Customer and Client!) it is very likely that such things will occur. Certainly in our example there were two situations where different systems overlapped in functionality.

• There will be conflicts between the systems with respect to differences in data and process definitions. Worst of all, these discrepancies may not be discovered until years after the system is delivered.

The above example shows us that requirements obtained from individual events require another level of reconciliation to ensure they are complete. Requirements are said to be “complete” when they define the whole object. The more incomplete they are, the more modifications will be required later. The more modifications in a system, the higher the likelihood that data and processes across applications may conflict with each other. Ultimately this results in a less dependable, lower quality system. Most of all, event analysis alone is prone to missing events that users have never experienced. This situation is represented in the car example by the portions of the car not included in any of the three systems. System functions and components may also be missed because users are absent or unavailable at the time of the interviews, or because no one felt the need to automate a certain aspect of the object. In either case, the situation should be clear. We need to establish objects prior to doing event analysis. The question is how?

Before we discuss the procedures for identifying an object, it is worth looking at the significant differences between the object approach and earlier approaches. This particular example, comparing the generations of systems to the object methodology, was first discussed with a colleague, Eugene O’Rourke. The first major systems were developed in the 1960s and were called batch, meaning that they typically operated on a transaction basis: transactions were collected and then used to update a master file. Batch systems were very useful in the financial industries, including banks. We might remember having to wait until the morning after a banking transaction to see our account balance, because a batch process updated the master account files overnight. These systems were built based on event interviewing, where programmer/analysts met with users and designed the system. Most of these business systems were developed and maintained using COBOL.

In the early 1970s, the new buzz word was “on-line, real-time” meaning that many processes could now update data immediately or on a “real-time” basis. Although systems were modified to provide these services, it is important to understand that they were not reengineered. That is, the existing systems, which were based on event interviews, were modified, and in many cases portions were left untouched.

In the late 1980s and early 1990s the hot term became “client/server.” These systems, which will be discussed later, are based on sophisticated distributed systems concepts. Information and processes are distributed among many Local and Wide Area Networks. Many of these client/server systems are re-constructions of the on-line real-time systems which in turn were developed from the 1960s batch systems. The point here is that we have been applying new technology to systems that were designed over 30 years ago without considering the obsolescence of the design.

Through these three generations of systems, the analyst has essentially been on the outside looking in (see Figure 11.3). The completeness of the analysis was dependent upon—and effectively dictated by—the way the inside users defined their business needs.

Figure 11.3 Requirements are often developed by analysts from an outside view. The specifications are therefore dependent on the completeness of the user’s view.

Object-orientation, on the other hand, requires that the analyst have a view from the inside looking out. What we mean here is that the analyst first needs to define the generic aspects of the object and then map the user views to the particular components that exist within the object itself. Figure 11.4 shows a conceptual view of the generic components that could be part of a bank.

Figure 11.4 shows the essential functions of the bank. The analyst is on the inside of the organization when interviewing users and therefore will have the ability to map a particular requirement to one or more of these essential functions. In this approach, any user requirement must fit into at least one of the essential components. If a user has a requirement that is not part of an essential component, then it must be either qualified as missing (and thus added as an essential component) or rejected as inappropriate.

The process of taking user requirements and placing each of their functions into the appropriate essential component can be called mapping. The importance of mapping is that functions of requirements are logically placed where they generically belong, rather than according to how they are physically implemented. For example, suppose Joseph, who works for a bank, needed to provide information to a customer about the bank’s investment offerings. Joseph would need to access investment information from the system. If OO methods were used to design the system, all information about banking investments would be grouped together generically. Doing it this way allows authorized personnel to access investment information regardless of what they do in the bank. If event analysis alone were used, Joseph would probably have his own subsystem that defines his particular requirements for accessing investment information. The problem here is twofold: first, the subsystem does not contain all of the functions relating to investments. Should Joseph need additional information, he may need an enhancement or need to use someone else’s system at the bank.

Figure 11.4 Using the object approach, the analyst interviews users from the inside looking out.

Second, Joseph’s subsystem may define functions that have already been defined elsewhere in another subsystem. The advantage of OO is that it centralizes all of the functions of an essential component and allows these functions to be “reused” by all processes that require its information. The computer industry calls this capability Reusable Objects.

Identifying Objects and Classes

The most important challenge in successfully implementing OO is the ability to understand and select objects. We have already used an example which identified a car as an object. The car is what can be called a tangible object, or as the industry calls it, a “physical object.” Unfortunately, there is another type of object called an “abstract” or intangible object. An intangible object is one that you cannot touch, or as Grady Booch describes it: “something that may be apprehended intellectually ... Something towards which thought or action is directed.”22 An example of an intangible object is the security component of the essentials of the bank. In many instances OO analysis will begin with identifying tangible objects, which will in turn make it easier to discover the intangible ones.

Earlier in the book, we saw that systems are made up of two components: data and processes. Chapter 6 showed how the trend of many database products is toward combining data and processes via stored procedures called triggers. Object orientation is somewhat consistent with this trend in that all objects contain their own data and processes, called attributes and services, respectively. Attributes are effectively a list of data elements that are permanent components of the object. For example, a steering wheel is a data element that is a permanent attribute of the object “Car.” The services (or operations), on the other hand, define all of the processes that are permanently part of, or “owned” by, the object. “Starting the Car” is a service defined within the object Car; this service contains the algorithms necessary to start a car. Services are defined and invoked through a method. A method is a process specification for an operation (service).23 For example, “Driving the Car” could be a method for the Car object. The “Driving the Car” method would invoke a service called “Starting the Car” as well as other services until the entire method requirement is satisfied. Although a service and a method can have a one-to-one relationship, it is more likely that a service will be a subset, that is, one of the operations that make up a method.
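
To make these definitions concrete, here is a minimal Python sketch of the Car object. The attribute names, the accelerate service, and the make/model values are illustrative assumptions, not part of the original example.

```python
class Car:
    """An object couples data (attributes) with processes (services)."""

    def __init__(self, make, model):
        # Attributes: data elements permanently owned by the object.
        self.make = make
        self.model = model
        self.engine_running = False

    # Services: processes permanently "owned" by the object.
    def start_car(self):
        """The 'Starting the Car' service: the algorithm to start the engine."""
        self.engine_running = True

    def accelerate(self):
        """A second (assumed) service used while driving."""
        if not self.engine_running:
            raise RuntimeError("The engine must be started first.")

    # A method is a process specification that invokes one or more services.
    def drive_car(self):
        """The 'Driving the Car' method invokes services until satisfied."""
        self.start_car()
        self.accelerate()

my_car = Car("Ford", "Sedan")   # assumed make and model
my_car.drive_car()
```

Note how the method/service distinction shows up directly: drive_car does no work of its own; it simply sequences the services that the object owns.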

Objects have the ability to inherit attributes and methods from other objects when they are placed within the same class. A class is a group of objects that have similar attributes and methods and typically have been put together to perform a specific task.

Figure 11.5 Class Car Transmissions.

To further understand these concepts, we will establish the object for “Car” and place it in a class of objects that focuses on the use of transmissions in cars (see Figure 11.5).

Figure 11.5 represents an object class called Car Transmissions. It has three component objects: cars, automatic trans, and standard trans. The car object is said to be the parent object. Automatic trans and standard trans are object types. Both automatic trans and standard trans will inherit all attributes and services from their parent object, cars. Inheritance in object technology means that the children effectively contain all of the capabilities of their parents. Inheritance is implemented as a tree structure24; however, instead of information flowing upward (as is the case in tree structures), the data flows downward to the lowest-level children. Therefore, an object inheritance diagram is said to be an inverted tree. Because the lowest level of the tree inherits from every one of its parents, only the lowest-level object need be executed; that is, executing the lowest level will automatically allow the application to inherit all of the parent information and applications as needed. We call the lowest-level objects concrete, while all others in the class are called abstract. Objects within classes can change simply by the addition of a new object. Let us assume that there is another level added to our example. The new level contains objects for the specific types of automatic and standard transmissions (see Figure 11.6).

The above class has been modified to include a new concrete layer; the automatic trans and standard trans objects are now abstract. The four new concrete objects inherit not only from their respective parent objects, but also from their common grandparent, cars. It is also important to recognize that classes can inherit from other classes. The same example could therefore show each object as a class: cars would represent a class of car objects and automatic trans another class of objects. The class automatic trans would then inherit from the cars class in the same manner described above. We call this class inheritance.
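
A rough Python sketch of the hierarchy in Figure 11.6 follows; the specific transmission type names are hypothetical, since the figure is not reproduced here. The comments mark which objects are abstract and which are concrete.

```python
class Car:                                # abstract: the grandparent object
    def start_car(self):
        return "engine started"

class AutomaticTrans(Car):                # abstract once the new layer is added
    def shift(self):
        return "shifts gears automatically"

class StandardTrans(Car):                 # abstract
    def shift(self):
        return "driver shifts gears manually"

# The new concrete layer: specific transmission types (names assumed).
class FourSpeedAutomatic(AutomaticTrans):
    pass

class FiveSpeedStandard(StandardTrans):
    pass

# Executing the lowest-level (concrete) object automatically inherits
# everything from its parent and its grandparent:
car = FiveSpeedStandard()
print(car.start_car())   # inherited from Car, the grandparent
print(car.shift())       # inherited from StandardTrans, the parent
```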

We mentioned before the capability of OO objects to be reusable (Reusable Objects). This is very significant in that it allows a defined object to become part of another class, while still keeping its own original identity and independence. Figure 11.7 demonstrates how Cars can be reused in another class.

Figure 11.6 Class-Car Transmission types.

Figure 11.7 Class: Transportation Vehicles.

Notice that the object Car is now part of another class called Transportation Vehicles. However, Car, instead of being an abstract object within its class, has become concrete and thus inherits from its parent, Transportation Vehicles. The object Cars has methods that may execute differently depending on the class it is in. Therefore, Cars in the Transportation Vehicles class might interpret a request for “driving the car” as it relates to general transportation vehicles; specifically, it might invoke a service that shows how to maneuver a car while it is moving. On the other hand, Cars in the Car Transmissions class might interpret the same message coming from one of its children objects as meaning how the transmission shifts when a person is driving. This phenomenon is called polymorphism. Polymorphism allows an object to change its behavior within the same methods under different circumstances. More important, polymorphism is dynamic in behavior, so its changes in operation are determined at run time, when the object is executed.
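
The sketch below, with assumed class and method names, shows the same “drive” message resolved differently at run time depending on which object receives it.

```python
class TransportationVehicles:
    def drive(self):
        return "maneuver the vehicle while it is moving"

class CarAsVehicle(TransportationVehicles):
    # Car as a concrete object in the Transportation Vehicles class:
    # it inherits the general interpretation of "driving the car".
    pass

class CarAsTransmissionParent:
    # Car as the parent of the Car Transmissions class interprets the
    # same message in terms of how the transmission shifts.
    def drive(self):
        return "shift the transmission as the car moves"

# Polymorphism: the behavior bound to 'drive' is determined at run time.
for car in (CarAsVehicle(), CarAsTransmissionParent()):
    print(car.drive())
```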

Because objects can be reused, keeping the same version current in every copy of the same object in different classes is important. Fortunately, objects are typically stored in dynamic link libraries (DLL). The significance of a DLL is that it always stores the current version of an object. Because objects are linked dynamically before each execution, you are ensured that the current version is always the one used. The DLL facility therefore avoids the maintenance nightmares of remembering which applications contain the same subprograms. Legacy systems often need to relink every copy of the subprogram in each module where a change occurs. This problem continues to haunt the COBOL application community.

Another important feature of object systems is instantiation and persistence. Instantiation allows multiple executions of the same class to occur independently of one another. This means that multiple copies of the same class can be executing concurrently. These executions are mutually exclusive and can be executing different concrete objects within that class. Because of this capability, we say that an object can have multiple instances, one within each executing copy of a class to which it belongs. Sometimes, although a class execution has finished, a component object continues to operate, or persist. A persistent object, therefore, is one that continues to operate after the class or operation that invoked it has finished. The system must keep track of each of these object instances.
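
Below is a loose Python analogy with assumed names: instantiation as independent instances of one class, and persistence approximated by saving an instance’s state so that it outlives the execution that created it. (The text’s definition of persistence, an object that keeps operating after its invoker finishes, is broader than this save-and-restore sketch.)

```python
import pickle

class Transmission:
    def __init__(self, gear=1):
        self.gear = gear

# Instantiation: multiple instances of the same class run independently.
first = Transmission()
second = Transmission()
first.gear = 3                 # mutually exclusive: 'second' is unaffected
assert second.gear == 1

# Persistence (by analogy): the instance's state survives after the
# operation that created it has finished.
with open("transmission.pkl", "wb") as f:
    pickle.dump(first, f)

with open("transmission.pkl", "rb") as f:
    restored = pickle.load(f)
assert restored.gear == 3
```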

The abilities of objects and classes to have inheritance, polymorphic behavior, instantiation and persistence are just some of the new mechanisms that developers can take advantage of when building OO systems.25 Because of this, the analyst must not only understand the OO methodology, but must also apply new approaches and tools that will allow an appropriate schematic to be produced for system developers.

Object Modeling

In Chapter 5, we discussed the capabilities of a state transition diagram (STD) and defined it as a tool useful for modeling event driven and time dependent systems. A state very closely resembles an object/class and therefore can be used with little modification to depict the flow and relationships of objects. There are many techniques available such as Rumbaugh’s Object Modeling Technique (OMT) and Jacobson’s Object-Oriented Software Engineering (OOSE) that can also be applied. However, be careful, as many of the methodologies are very complex and can be overwhelming for the average analyst to use in actual practice.

The major difference between an object and a state is that an object is responsible for its own data (which we call an attribute in OO). An object’s attributes are said to be encapsulated behind its methods, that is, a user cannot ask for data directly. The concept of encapsulation is that access to an object is allowed only for a purpose rather than for obtaining specific data elements. It is the responsibility of the method and its component services to determine the appropriate attributes that are required to service the request of the object. For this reason, object relationships must include a cardinality definition similar to that found in the ERD. An object diagram, regardless of whose methodology is used, is essentially a hybrid of an STD and an ERD. The STD represents the object’s methods and the criteria for moving from one object to another. The ERD, on the other hand, defines the relationship of the attributes between the stored data models. The result is best shown using the order processing example contained in Figure 11.8.
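
A minimal sketch of encapsulation, with hypothetical names: the attribute is hidden behind the object’s methods, and callers state a purpose rather than asking for data elements directly.

```python
class Order:
    def __init__(self):
        self.__items = []               # encapsulated attribute (name-mangled)

    # Access is granted for a purpose, not for raw data elements.
    def add_item(self, item, quantity):
        self.__items.append((item, quantity))

    def total_quantity(self):
        # The method decides which attributes it needs to service the request.
        return sum(qty for _, qty in self.__items)

order = Order()
order.add_item("widget", 3)
print(order.total_quantity())           # 3
# Accessing order.__items raises AttributeError: the data cannot be
# asked for directly.
```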

The object diagram in Figure 11.8 reflects that a customer object submits a purchase order for items to the order object. The relationship between customer and order reflects both STD and ERD characteristics. The “submits purchase order” label specifies the condition for changing state, that is, for moving to the order object. The direction arrow also tells us that the order object cannot send a purchase order to the customer object. The crow’s-foot cardinality shows us that a customer object must have at least one order to create a relationship with the order object. After an order is processed, it is prepared for shipment. Notice that each order has one related shipment object; however, multiple warehouse items can be part of a shipment. The objects depicted above can also represent classes, suggesting that they are comprised of many component objects. These component objects might in turn be further decomposed into other primitive objects. This is consistent with the concept of the logical equivalent and with functional decomposition (see Figure 11.9).
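
One way to read Figure 11.8 in code; the class bodies below are assumptions sketched only to show the direction and cardinality of the relationships.

```python
class Customer:
    def __init__(self, name):
        self.name = name
        self.orders = []              # cardinality: one customer, many orders

    def submit_purchase_order(self, items):
        # The condition that moves control to the Order object; the arrow
        # is one-way, so an Order never sends a purchase order to a Customer.
        order = Order(self, items)
        self.orders.append(order)
        return order

class Order:
    def __init__(self, customer, items):
        self.customer = customer
        self.items = items
        self.shipment = None          # each order has one related shipment

    def prepare_shipment(self):
        self.shipment = Shipment(self.items)
        return self.shipment

class Shipment:
    def __init__(self, items):
        self.items = items            # many warehouse items per shipment

customer = Customer("Acme Corp.")     # assumed customer name
shipment = customer.submit_purchase_order(["item-1", "item-2"]).prepare_shipment()
```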

It is important that the analyst specify whether classes or objects are depicted in the modeling diagrams. It is not advisable to mix classes and objects at the same level. Obviously the class levels can be effective for user verification, but objects will inevitably be required for final analysis and engineering.

Figure 11.8 An object/class diagram.

Figure 11.9 The component objects of the Warehouse class.

Relationship to Structured Analysis

Many analysts make the assumption that the structured tools discussed in Chapter 4 are not required in OO analysis. This simply is not true, as we have shown in the previous examples. To further emphasize the need to continue using structured techniques, we need to understand the underlying benefit of the OO paradigm and how structured tools are necessary to map to the creation of objects and classes. It is easy to say, “find all the objects in the essential components”; actually having a process to do so is another story. Before providing an approach to determine objects, let us first understand the problem.

Application Coupling

Coupling can be defined as the measurement of an application’s dependency on another. Simply put, does a change in one application program necessitate a change to another application program? Many known system malfunctions have resulted from highly coupled systems. The problem, as you might have anticipated, relates back to the analysis function, where decisions are made about which services should be joined to form a single application program. Coupling is never something we want, but no system can be made up of just one program. Coupling is therefore a reality, and one that analysts must focus on. Let us elaborate on the coupling problem through the example depicted in Figure 11.10.

The two programs A and B are coupled via the passing of the variable Y. Y is subsequently used in B to calculate R. Should the value of Y change in A, it will not necessitate a change in B. This is considered good coupling. However, let us now examine X. We see that X is defined in both A and B. Although the value of X does not cause a problem in the current versions of A and B, a subsequent change to X in A will require a programmer to remember to change the value in B as well.

Figure 11.10 Application coupling.

This is a maintenance nightmare. In large enterprise-level systems, analysts and programmers cannot “remember” where all of these couples have occurred, especially when the original developers are no longer with the organization. The solution to this problem is to pass X from program A as well (see Figure 11.11).

We now see that both X and Y are passed and programs A and B are said to have low coupling. In addition, program A is said to be more cohesive.
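
The actual formulas in Figures 11.10 and 11.11 are not reproduced here, so the arithmetic below is assumed; the structure, though, mirrors the text: the first pair of functions duplicates X, while the second passes it.

```python
# Highly coupled (Figure 11.10): X is defined in both programs, so a
# change to X in A forces a programmer to remember to change B as well.
def program_a_coupled():
    x = 10                     # X defined in A
    y = x * 2                  # assumed calculation
    return y                   # only Y is passed

def program_b_coupled(y):
    x = 10                     # X duplicated in B -- the maintenance trap
    return y + x               # R

# Low coupling (Figure 11.11): A also passes X, so B never needs to
# change when the value of X does.
def program_a(x=10):
    y = x * 2
    return x, y                # both X and Y are passed

def program_b(x, y):
    return y + x               # R is computed only from what was passed in

print(program_b(*program_a()))   # 30
```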

Application Cohesion

Cohesion is the measurement of how self-contained a program is in its own processing. That is, a cohesive program contains all of the necessary data and logic to complete its processing without being directly affected by another program; a change in another program should not require a change to a cohesive one. Furthermore, a cohesive program should not cause a change to be made in another program. Therefore, cohesive programs are independent programs that react to messages to determine what they need to do; however, they remain self-contained. When program A also passed X, it became more cohesive, because a change in X no longer required a change to be made to another program. In addition, B is more cohesive because it gets the change of X automatically from A. Systems that are designed more cohesively are said to be more maintainable. Their code can also be reused or retrofitted into other applications as components, because such components are wholly independent.

Figure 11.11 Application coupling using variables X and Y.

Figure 11.12 Coupling and cohesion relationships.

A cohesive program can be compared to an interchangeable standard part of a car. For example, if a car requires a standard 14-inch tire, typically any tire that meets the specification can be used. The tire, therefore, is not married to the particular car, but rather is a cohesive component for many cars.

Cohesion is in many ways the opposite of coupling: the higher the cohesion, the lower the coupling. Analysts must understand that neither cohesion nor coupling can exist in the extreme. This is shown in the graph in Figure 11.12.

The graph shows that we can never reach 100% cohesion; that would mean there is only one program in the entire system, a situation that is unlikely. However, it is possible to have a system where a 75% cohesion ratio is obtained.

We now need to relate this discussion to OO. Obviously OO is based very much on the concept of cohesion. Objects are independent reusable modules that control their own attributes and services. Object coupling is based entirely on message processing via inheritance or collaboration.26 Therefore, once an object is identified, the analyst must define all of its processes in a cohesive manner. Once the cohesive processes are defined, the required attributes of the object are then added to the object. Figure 11.13 contains a table showing how processes can be combined to create the best cohesion:

Figure 11.13 Methods of selecting cohesive objects.

The tiers above are ordered from best to worst: the by function method is the most desirable and the by lines of code method the least desirable. Tiers 1 and 2 will render the best object cohesiveness. This can be seen in the example in Figure 11.14.

Figure 11.14 depicts a four-screen system that includes four objects; that is, each screen is a separate object. The Transaction Processing object has been designed using tier 2, by same data, since it deals only with the Transaction File. The object is cohesive because it does not depend on or affect another module in its processing. It provides all of the methods required for transaction data.

The Financials object is an example of tier 1, the by function method, since a Balance Sheet is dependent on the Income Statement and the Income Statement is dependent on the Trial Balance. The object is therefore self-contained, holding all the functions necessary to produce financial information (in this example).
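
A sketch of the Financials object’s tier-1 cohesion follows; the ledger attribute and the calculations are placeholders, since Figure 11.14 does not specify them. The point is the dependency chain living entirely inside one object.

```python
class Financials:
    """Tier 1, 'by function': every step needed to produce financial
    information is self-contained within this one object."""

    def __init__(self, ledger):
        self.ledger = ledger                   # assumed attribute

    def trial_balance(self):
        return sum(self.ledger)                # placeholder calculation

    def income_statement(self):
        # Depends on the trial balance, but only within this object.
        return self.trial_balance() * 0.9      # placeholder calculation

    def balance_sheet(self):
        # Depends on the income statement, again internally.
        return self.income_statement() + 1000  # placeholder calculation

financials = Financials([500, 250, -100])
print(financials.balance_sheet())
```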

The System Editor, on the other hand, is an example of tier 3: it handles all of the editing (verification of the quality of data) for the system. Although there appears to be some benefit to having similar code in one object, we can see that it affects many different components. It is therefore considered a highly coupled object and not necessarily the easiest to maintain.

We can conclude that tiers 1 and 2 provide analysts with the most attractive way for determining an object’s attributes and services. Tiers 3 and 4, although practiced, do not provide any real benefits in OO and should be avoided as much as possible. The question now is what technique do we follow to start providing the services and attributes necessary when developing logical objects?

The structured tools discussed in Chapter 5 provide us with the essential capabilities to work with OO analysis and design. The STD can be used to determine the initial objects and the conditions of how one object couples or relates to another. Once the STD is prepared, it can be matured into the object model discussed earlier in this chapter. The object model can be decomposed to its lowest level; the attributes and services of each object must then be defined. All of the DFD functional primitives can now be mapped to their respective objects as services within their methods. This mapping is also a way of determining whether an object is missing (should there be a DFD that does not have a related object).

Figure 11.14 Applications with varying types of object cohesion.

Figure 11.15 The relationships between an object and the ERD and DFD.

The analyst should try to combine each DFD using the tier 1, by function approach. This can sometimes be very difficult depending on the size of the system. If the tier 1 approach is too difficult, the analyst should try tier 2 by combining DFDs based on their similar data stores. This is a very effective approach; since tier 1 implies tier 2,27 it is a very productive way to determine how processes should be mapped to their appropriate objects. This does not suggest that the analyst should not try tier 1 first.

The next activity is to determine the object’s attributes or data elements. The ERD serves as the link between an attribute in an object and its actual storage in a database. It is important to note that the attribute setting in an object may have no resemblance to its setting in the logical and physical data entity. The data entity is focused on the efficient storage of the elements and its integrity, whereas the attribute data in an object is based on its cohesiveness with the object’s services.

The mapping of the object to the DFD and ERD can be best shown graphically, as in Figure 11.15.

Thus, the functional primitive DFDs and the ERD resulting from the normalization process provide the vehicles for deriving an object’s attributes and services.

Object-Oriented Databases

There is a movement in the industry to replace the traditional relational database management system (RDBMS) with the object-oriented database management system (OODBMS). Object databases differ greatly from the relational model in that an object’s attributes and services are stored together. Therefore, the concept of columns and rows of normalized data becomes extinct. The proponents of OODBMS see a major advantage in that object databases can also keep graphical and multimedia information about the object, something that relational databases cannot do. Time will tell, but the relational model is expected to remain in use for some time. However, most RDBMS products will become more object-oriented. This means they will use the relational engine but employ more OO capabilities, that is, build a relational hybrid model. In either case, analysts should continue to focus on the logical aspects of capturing the requirements. Changes in the OO methodologies are expected to continue over the next few years.

Problems and Exercises

1. What is an object?

2. Describe the relationship between a method and a service.

3. What is a class?

4. How does the object paradigm change the approach of the analyst?

5. Describe the two types of objects and provide examples of each type.

6. What are essential functions?

7. What is an object type and how is it used to develop specific types of classes?

8. What is meant by object and class inheritance?

9. How does inheritance relate to the concept of polymorphism?

10. What are the association differences between an ERD and an object diagram?

11. How does functional decomposition operate with respect to classes and objects?

12. What are coupling and cohesion? What is their relationship with each other?

13. How does the concept of cohesion relate the structured approach to the object model?

14. What four methods can be used to design a cohesive object?

15. What are object databases?
