Building a Better Application Life Cycle
Better methodologies, modeling, and more sharing of data can make a difference in building better software faster
by Peter Varhol

Posted June 14, 2004

We all know what the application life cycle is. Based on a business need codified as a set of requirements, a development team designs and builds an application that purports to meet those requirements. That proposition is evaluated by a set of testers, who match reality to the requirements and determine where they differ. If they differ too much, the application goes back to the developers, and the process repeats.

Once the testers give the application the green light, it's turned over to the operations staff, which performs more realistic testing in a staging area and then gradually moves it into real production use. Once users begin using the application in the performance of their jobs, the need for enhancements becomes apparent, and the process starts all over again.

But today this application life cycle is undergoing a dramatic change, brought on by a combination of business and technology drivers. On the business side, the rapid pace of change has necessitated more rapid application development and deployment, along with the creation of more sophisticated applications that process data and make decisions in unique ways.

The application life cycle as it exists today can hardly be called an unbridled success. A hefty 30 percent of all projects fail to achieve their desired goals, while as many as 70 percent are late or over budget, according to a Gartner study. While it may not be clear what is broken, it is clear that the process by which we build software needs improvement.

Technology has responded to these business needs and added some twists of its own. Application teams are experimenting with new methodologies for delivering more relevant software faster, while new tools both streamline existing processes—and create new ones—in the pursuit of faster deployment and higher reliability.

Perhaps most important, it's no longer enough simply to get an application into production. That application has to meet well-defined service-level agreements (SLAs), and any problems have to be diagnosed and resolved faster than ever. One key is to make sure that the application is prepared to meet those SLAs, but a premium is also placed on being able to diagnose the application in production, rather than attempting to replicate the problem in the development lab.

Before continuing, I'll offer my usual disclaimer that I toil by day for Compuware Corporation, an application life cycle software vendor. I won't promote its positions or products at the expense of others.

Changing the Road Map
Traditionally, applications were developed and deployed according to well-known and practiced processes. Chief among them was the waterfall model, a highly structured approach that all of us have undoubtedly encountered at some point. The waterfall model, originally pioneered by IBM and codified by practitioners in the 1970s, is credited with bringing order and discipline to the complex and often-chaotic process of building and deploying software.

Today the waterfall model is largely considered slow, unwieldy, and unable to change gears to respond to rapidly changing business conditions. While many project groups still use it with success, others are experimenting with approaches that deliver better applications more quickly.

Perhaps the best known of the new generation of development methodologies is Kent Beck's Extreme Programming (XP) (see Resources). XP is considered a lightweight process in that it places a premium on the production of code rather than documentation of the process and results. Meetings and discussions tend to be short and focused on discrete deliverables that can be accomplished within a few days.

Testing plays a big role in XP; tests are written before writing code, so when all of the tests are passed, the coding is complete. XP is especially famous (or infamous, depending on who you talk to) for the concept of pair programming, which has two developers working closely together on the same code, usually on a single computer. This enables one programmer to concentrate on the tactical issues of coding while the other considers more strategic issues of algorithm design and program structure.

The XP approach is most appropriate when an application is poorly defined, requirements may change, and it's possible to get extensive user participation during all parts of the life cycle. Without almost continuous user participation, you can't have the short cycles and feedback needed to support this methodology.

An offshoot of XP is test-driven development (TDD), which incorporates the testing strategies of XP while not adhering to all aspects of the methodology. TDD requires that tests are written before coding the application and run often during development. Failing a particular test indicates a flaw in the application. More tests can be written during development, but none can be taken away. Once again, when an application passes all of its tests, it is done.
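The test-first cycle described above can be sketched in a few lines. This is a minimal illustration, not a real XP artifact; the shopping-cart example and all of its names are hypothetical.

```python
# The test exists before the code it exercises. It is written first and
# fails until the Cart class below is implemented.

def test_cart_total():
    cart = Cart()
    cart.add("widget", price=3.50, quantity=2)
    assert cart.total() == 7.00

# Only now is the production code written, and only enough to pass the test.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price, quantity=1):
        self._items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)

test_cart_total()  # all tests pass, so this feature is done
```

If a later change breaks the total calculation, the test fails immediately, which is exactly the flaw-detection behavior TDD relies on.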

XP and other similar approaches fall under the umbrella of Agile Development, a philosophy that emphasizes code production, rapid implementation, and extensive user involvement (see Resources). The goal of the Agile Development movement is to produce working applications, treating documentation and adherence to specific process tasks as secondary.

But heavyweight or lightweight, methodologies are only as good as their participants and the culture surrounding the project team. Teamwork is a critical part of any software project, and understanding and applying team and interpersonal dynamics effectively can make almost any methodology successful. The methodology guides the development, but the talent and the chemistry among the team members can make the difference between success and something less.

Bridging the Gaps
Probably the biggest disadvantage of any methodology is the difficulty of unambiguously communicating the results of one set of participants to the next in the process. For example, written requirements may be understood perfectly by the business analysts or users who write them, but they are frequently ambiguous or misunderstood by those responsible for responding with a technical solution. Likewise, problems found by QA testers are often marked as "not reproducible" by the responsible developers. In some cases, those responsible for one phase of the process may not even know what they want until they see the results of the next.

XP and some of the other methodologies try to address this issue by specifying more frequent communications between participants in the process and by delivering intermediate results more quickly. The ambiguities of communication are corrected by many small reviews and adjustments throughout the development and testing process. In efforts where it's possible to have close coordination, this approach can make the handoffs less ambiguous.

But many projects can't use many of the XP precepts, whether by nature or by culture. Commercial software efforts can rarely engage the user community so closely, and even for internal projects, users must be relieved of other duties to participate. In some groundbreaking software projects user feedback can inhibit innovation, and often the development team is ill prepared to translate criticism into results.

Fortunately, there are other ways of improving communication and making the transition between phases of the application life cycle more seamless. Formal models are another way to communicate concepts and needs unambiguously across the life cycle. Developers have used diagrams for decades to succinctly model software and system components and communicate design and implementation concepts. Pictographs such as flow charts, dataflow diagrams, and state transition diagrams have been an indispensable way of both visualizing a design and making that design plain to others.

Today, technology and effort have made modeling a more exact science than ever. These diagrams are formal in that they are both unambiguous and provable. In other words, while they may vary in the coded implementation, they result in a single definition of logic and data flow.

The best-known modeling technique today is the Unified Modeling Language (UML), so-called because it brings together all possible models of software and system function and behavior. By describing such aspects as activities, states, and sequences, you can fully define a software system. UML was devised largely by Rational Software (now a part of IBM), and was turned over to the Object Management Group for further development and standardization (see Resources). It became the basis of products such as Rose from Rational, I-Logix Rhapsody, and Embarcadero Describe (see Figure 1).

Building on UML is the Model-Driven Architecture (MDA), an advance that logically separates the model from its implementation details, which makes it possible to build an application model without any inherent knowledge of the underlying implementation. The set of diagrams that define the application without reference to the implementation is known as the Platform-Independent Model (PIM). Once the application is modeled in this fashion, it can be transformed to a Platform-Specific Model (PSM). The PSM can then be used to generate source code that works on that platform.
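A toy sketch can make the PIM-to-PSM idea concrete. Real MDA tools transform UML models; here a plain data structure stands in for the PIM, and the hypothetical transformation targets a relational platform.

```python
# A platform-independent model (PIM) as plain data: no mention of any
# language, database, or middleware. The entity and attribute names are
# invented for illustration.
pim = {
    "entity": "Customer",
    "attributes": [("name", "string"), ("credit_limit", "decimal")],
}

def to_sql_psm(model):
    """Transform the PIM into a relational, platform-specific artifact."""
    # The transformation encapsulates the platform knowledge: how abstract
    # types map onto this platform's concrete types.
    type_map = {"string": "VARCHAR(255)", "decimal": "NUMERIC(12,2)"}
    cols = ",\n  ".join(f"{n} {type_map[t]}" for n, t in model["attributes"])
    return f"CREATE TABLE {model['entity']} (\n  {cols}\n);"

print(to_sql_psm(pim))
```

The same PIM could feed a second transformation targeting, say, a Java class, which is the point: platform knowledge lives in the transformation, not the model.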

Equating Models and Apps
Ideally, the transformations are automated, but the state of the art cannot yet achieve full automation. Vendors and industry groups are attempting to assist in this process by defining and promoting standard transformation models for specific industries and by attempting to automate the conversion of models between the stages.

Several vendors have moved forward from UML to MDA, offering products that automate the conversion of formal models to running applications. These include Artisan Software's Real-Time Studio, Compuware OptimalJ, and Project Technology's BridgePoint Development Suite (see Figure 2).

The goal of these approaches is to equate the model with the application. By building a formally correct model that also meets the business and technical requirements, you could have the running application. While that is the goal, getting there is a long and complex process. For most types of realistic applications, it's still a work in progress.

Whatever form it takes, formal modeling tends to have a high overhead, not necessarily in computing resources but in learning and understanding. UML itself has twelve diagram types, and while not all are needed for any single application, an experienced modeler would have to be conversant in all to work effectively on different projects. MDA adds a further layer of complexity in that you have to transform the PIM into the PSM, a process that has not yet been fully automated.

But the benefits of formal modeling are significant. By working at a higher level of abstraction, developers can concentrate on solving the business problem rather than writing and debugging code. And because the model is the application, there is little or no chance of the miscommunication that often occurs when a design is coded manually.

Modeling also affects the skills mix within a development organization. Higher levels of skill are needed to design the overall architecture of complex, multicomponent, distributed applications, and the concept of the enterprise architect was born. Following the building of an architecture, the modeling advocates believe, the task of building models and applying transformations to generate applications can fall to professionals skilled in solving business problems rather than coding complex programming algorithms. Whatever other implications this concept has, it improves communications across the life cycle by bringing the problem and solution closer together.

Therefore, modeling isn't just a design-and-development approach. Good models communicate information unambiguously to developers, testers, and operations analysts. They make an application more maintainable over its useful life by providing maintenance programmers with the framework needed to add new features, and fix old ones, after the fact.

Modeling seems to be on the evolutionary path away from a machine view of the world and toward a problem-solving paradigm. But that doesn't mean it's going to replace hand coding any time soon. A combination of education and training, more powerful tools, and a change in the application development culture are needed to see this transition happen.

Universal Life Cycle Platform
While issues of effective communication and data flow challenge all phases of the application life cycle, the risk is especially acute as an application is handed off from one group to another, a process that roughly corresponds to moving from one stage to the next. Models and new methodologies try to address this communications and information gap by making sure artifacts exist to represent status and intent.

There may be yet a better way of making sure that information is passed on unambiguously during the application life cycle. The Holy Grail of software development tools is to automate the links between the different phases of the life cycle, so that the handoff between participants at different stages is seamless, with no loss of application information or expertise.

Perhaps the best way to do this transfer is with a single platform that is useful to all participants in the life cycle. Today, the tools used by those participants tend to be designed for specific uses at specific times, with little forethought that other participants may want to make use of their data. Designers use complex drawing tools, while developer activity focuses on the IDE. Testers have test management and automation tools, and those prepping an application just prior to deployment have facilities for measuring application load and network traffic.

We've only scratched the surface of integrating these disparate tools. Defect tracking systems, for example, enable communication of application problems among the development team and among testers and developers. Similarly, requirements management tools provide the ability to let testers validate features against business needs. Products such as these foster communications across the life cycle.

For the vendors who have built or acquired many of the tools used by participants in the life cycle, getting these tools to work together remains a significant challenge. Some progress has been made in sharing data among individual tools, often using XML or Java Message Service (JMS), but even this method can break down when the data formats differ.
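The XML-based sharing mentioned above can be sketched with the standard library alone. The defect fields here are hypothetical, not any vendor's actual schema; the point is that two tools agreeing on one format can pass a record back and forth without loss.

```python
# Serialize a defect record as XML so a second life-cycle tool can read
# it back. Uses only Python's standard library.
import xml.etree.ElementTree as ET

def defect_to_xml(defect_id, summary, steps):
    root = ET.Element("defect", id=defect_id)
    ET.SubElement(root, "summary").text = summary
    steps_el = ET.SubElement(root, "steps")
    for step in steps:
        ET.SubElement(steps_el, "step").text = step
    return ET.tostring(root, encoding="unicode")

# One tool writes the record...
doc = defect_to_xml("D-101", "Cart total miscalculated",
                    ["Add two items", "Open cart", "Observe wrong total"])

# ...and another tool parses the same record back, format intact.
parsed = ET.fromstring(doc)
print(parsed.find("summary").text)
```

In practice the hard part is exactly what the article notes: getting every tool to agree on one schema, rather than the mechanics of serialization.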

But there is no application life-cycle platform. There's no underlying method of communication for all activities and data that can be shared by all participants. Consider a situation where a QA tester identifies a feature bug. Nowadays, that requires providing a description of the bug in a defect tracking system, along with the steps needed to produce it. When this process works correctly, a developer can see the problem by repeating those steps, and then use the debugger to trace back the error to the offending code and attempt a fix.

All too often this methodology doesn't work correctly. Even if the description and how to reproduce the problem are accurate, they may not translate to a developer's computer setup. And the developer must still relate an imperfection in the feature to some part of the underlying code, a manual process that can take anywhere from a few minutes to a few weeks.

If the tester and developer were working from a single software tools platform, it's possible that the tester could find the feature bug and automatically send the developer a pointer to the code that executed while the feature ran. The relationship between the tester and developer becomes automated and seamless.
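The idea of capturing "the code that ran" during a test can be sketched with Python's built-in tracing hook. This is a simplified stand-in for what a shared platform would do with much richer instrumentation; the feature function is invented for illustration.

```python
# While a feature runs, record which lines of code execute, so a defect
# report could carry pointers to that code instead of prose reproduction
# steps. sys.settrace is a simple stand-in for real instrumentation.
import sys

def feature_under_test():
    subtotal = 2 + 2
    return subtotal * 10

def run_with_trace(fn):
    executed = []
    def tracer(frame, event, arg):
        if event == "line":
            executed.append((frame.f_code.co_name, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        result = fn()
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, executed

result, lines = run_with_trace(feature_under_test)
print(result)
print(lines)  # (function name, line number) pairs the tester could attach
```

A tester's tool could attach that list of (function, line) pairs to the defect record, giving the developer a direct path from symptom to code.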

Getting There
But this relationship is possible only if the developer and tester use the same underlying platform and share the same artifacts. To pinpoint the code from the feature, the QA tester needs access to the source code and possibly the debugger and other IDE features, which is possible only if the IDE also provides the test management tools useful when running and analyzing functional tests.

This same story can be repeated from project conception and requirements analysis all the way through to production and maintenance. If there were better sharing of information, probably through a common tools platform, it's likely that applications could be developed more quickly and with higher reliability. But it doesn't seem possible to get from where we are now to where we need to be.

One effort, fostered by the Eclipse Foundation, may change that. Eclipse was originally conceived and built as a platform for creating IDEs—a developer platform. But it's turning into a universal life-cycle platform, capable of hosting tools used by different participants across the life cycle. As more vendors join Eclipse, the platform supports a wider variety of tools across development, testing, and beyond (see Figure 3).

While the Eclipse user interface may seem most familiar to developers, its ability to host any tool for any participant in the life cycle is what sets it apart. Separate Eclipse projects support tools integration in different phases of the application life cycle. For example, the Hyades project provides an open source platform for testing tools, and a range of open source reference implementations of automated software quality (ASQ) tooling for testing, tracing, and monitoring software systems. Hyades provides a unified data model, a normative user experience and workflow, and a unified set of APIs and reference tools that work consistently across the range of targets.

Building this type of platform is expensive and doesn't provide much of a return for any one vendor. But as it is available through an inexpensive Eclipse membership, building a full set of interactive life cycle tools on top of it may be worthwhile. When Eclipse was a part of IBM, other vendors and some users were more leery of adopting a solution promoted primarily by a single dominant company. Since then, however, Eclipse has become a separate legal and practical entity that is chartered as a not-for-profit company. There's a good chance that Eclipse will attract enough vendor interest to become just such a universal platform.

None of this means that Eclipse is the last word on the subject of the universal life-cycle platforms. Vendors such as Mercury Interactive, Quest, Borland, and Compuware all seek to integrate their disparate products across the application life cycle, and none should be counted out. What gives Eclipse the advantage is that it doesn't have a charter to make a profit on its efforts, which may make it the platform of choice for developing future solutions.

Those who participate in the application life cycle will be the primary beneficiaries of the successful integration of tools across the life cycle. Sharing data using the same underlying platform is neither a panacea nor a substitute for the hard work and difficult choices needed when designing, building, and testing applications. It does mean, however, that analysts, architects, developers, and others won't have to guess at each other's intentions and can solve problems by looking at them together, rather than through the lenses of their own tools.

Use It to Your Benefit
All of these approaches, whether they involve processes, tools, or platforms, have the promise of making your application life cycle smoother and more robust. But if you're looking for a magic bullet that will reduce development uncertainty and schedules, no methodology, modeling strategy, or universal application life-cycle platform will provide it. Software development, testing, and deployment are still inherently risky ventures, and projects will continue to fail for any number of reasons.

But you can improve the odds. Having a process in place gives developers structure and a common framework for the team effort. By itself, it's not enough to succeed, but it's a start. Modeling will, at the very least, improve communications by removing ambiguities in requirements, design, and implementation. It also has the potential to raise the conceptual level of creating an application to the point where it is readily understandable by designers, developers, testers, and users. Last, the ability to easily share data such as test results, performance metrics, and defect details will make it easier to reduce ambiguities and eliminate confusion.

The application life cycle is really a continuum, and the phases we've defined are artificial. There's no sharp break between design and development, or development and test. We've created those breaks because that's the way we work, and the way we've created our tools to serve us. But it causes problems that need to be overcome to improve the chances of building the right software on time and on budget. Improving the way our teams work together to build software is an important first step.

About the Author
Peter Varhol is a senior member of the technology staff for Compuware Corporation. Contact Peter at .