Which model is a software development process that can be presumed to be an extension of the waterfall model?

Documentation for Software and IS Development

Thomas T. Barker, in Encyclopedia of Information Systems, 2003

III.A. Waterfall Method and its Documentation

The waterfall method of software documentation consists of a series of stages called “phases” of the development life cycle. The life cycle of the product includes the stages in Fig. 1. The waterfall method derives its name from the stair-step fashion in which development events proceed from one stage to another. This development method assumes that all or most of the important information about user requirements is available to the development team at the beginning of the project. The development team (usually cross-functional) then follows the stages from idea to implementation, each stage building on the one before it, with little or no going back. The process assumes a sign-off from one phase to the next as each phase adds value to the developing product. Because of its phase-by-phase structure, this model is best used to develop complex products requiring detailed specifications and efficient team communication. The degree of communication required to make this model work makes it difficult to handle in anything but a mature organization where resources, processes, and communication behaviors and protocols are well established.

Figure 1. The waterfall method.
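The sign-off gating described above is easy to picture as a strictly ordered pipeline with no path back upstream. A minimal sketch in Python (illustrative only: the phase names follow the generic life cycle of Fig. 1, and signed_off is a hypothetical stand-in for the sign-off decision at each gate):

```python
# Illustrative phase-gate pipeline: each phase needs sign-off before the next
# begins, and there is no mechanism for returning to an earlier phase.
PHASES = ["requirements", "design", "implementation", "verification", "maintenance"]

def run_waterfall(signed_off) -> None:
    """signed_off(phase) -> bool is a hypothetical sign-off check."""
    for phase in PHASES:
        print(f"Entering phase: {phase}")
        if not signed_off(phase):
            # The model offers no way back upstream; work halts until this
            # phase's deliverables are approved.
            raise RuntimeError(f"Sign-off missing for {phase}; cannot proceed")

run_waterfall(lambda phase: True)  # example: every gate approves
```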

In the waterfall method of development, a heavy emphasis falls on the product specification document. Seen as a blueprint for the entire project, the specification document needs to communicate with all the development team members (programmers, quality control persons, writers, sponsors, clients, managers, supervisors, and process control representatives). In the best of projects the product specifications document gets updated regularly to maintain its function as the central, directing script of the development. More commonly, however, the specification document gets forgotten as the programmers default to what is known as the code and fix process. The code and fix process is extremely time consuming and inefficient, as it follows a random pattern of reacting to bugs and problems instead of a coordinated, document-driven process. The communication overhead required by the code and fix process can soon wreck the schedule and consume the entire remaining project budget. Besides that drawback, the market window for a product often closes before this time-consuming process results in a marketable product.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B0122272404000472

Project Management for a Machine Learning Project

Peter Dabrowski, in Machine Learning and Data Science in the Oil and Gas Industry, 2021

7.1 Classical project management in oil & gas: a (short) primer

Projects in the oil and gas industry are some of the most complex and capital intensive in the world. Major projects, like installation of an offshore production rig, can cost upwards of US$ 500 million and make up a significant percentage of a company’s expenditures and risk exposure. Therefore, focusing on how to best manage processes, dependencies and uncertainties is imperative.

Traditionally, large projects in engineering design are run sequentially. This sequential approach is phased with milestones and clear deliverables, which are required before the next stage begins. These stages can vary, depending on needs and specifications, but typically all variations contain at least these five basic blocks (Fig. 7.1):

Figure 7.1. Example workflow of the Waterfall method.

With the workflow cascading toward completion, this approach is often referred to as the Waterfall method. This terminology was applied retroactively with the realization that projects could be run differently (e.g., Agile).

The Waterfall method is one of the oldest management approaches. It is used across many different industries and has clear advantages, including:

Clear structure: With a simple framework, the focus is on a limited number of well-defined steps. The basic structure makes it easy to manage, allowing for regular reviews with specific deliverable checks at each phase.

Focus: The team’s attention is usually on one phase of the project at a time. It remains there until the phase is complete.

Early, well-defined end goal: Determining the desired result early in the process is a typical Waterfall feature. Even though the entire process is divided into smaller steps, the focus on the end goal remains the highest priority. With this focus comes the effort to eliminate all risk that deviates from this goal.

Although the Waterfall method is one of the most widely used and respected methodologies, it has come under some criticism as of late. Depending on the size, type, complexity, and amount of uncertainties in your project, it might not be the right fit. Disadvantages of the Waterfall method include:

Difficult change management: The structure that gives Waterfall its clarity and simplicity also leads to rigidity. As the scope is defined at the very beginning of the process under very rigorous assumptions, unexpected modifications are not easy to implement and often carry significant cost implications.

Excludes client or end-user feedback: Waterfall is a methodology that focuses on optimization of internal processes. If your project is in an industry that relies heavily on customer feedback, this methodology will not be a good fit, as it does not provide for these kinds of feedback loops.

Late testing: Verification and testing of the product come toward the end of a project in the Waterfall framework. This can be risky, with implications for requirements, design, or implementation. Toward a project’s end, large modifications and revisions often result in cost and schedule overruns.

Given its pros and cons and its overall rigid framework, the Waterfall method seems an undesirable management approach for machine learning projects. However, with so many management approaches from which to choose (e.g., Lean, Agile, Kanban, Scrum, Six Sigma, PRINCE2, etc.), how do we know which is best for machine learning projects?

One way to begin to answer this question is to categorize projects based on their complexity. Ralph Douglas Stacey developed the Stacey Matrix to visualize the factors that contribute to a project’s complexity in order to select the most suitable approach based on project characteristics (Fig. 7.2).

Figure 7.2. Stacey Matrix with zones of complexity.

On the y-axis, it measures how close or far members of your team are from agreement on the requirements and objectives of the project. Your team members might have different views on the goals of the project and the needed management style to get there. Your company’s governance will influence the level of agreement as well.

Your project’s level of certainty depends on the team’s understanding of the cause-and-effect relationships of the underlying technology. A project is close to certain if you can draw on plenty of experience and you have gone through the planned processes multiple times. Uncertain projects are typically challenged by delivering something that is new and innovative. Under these circumstances, experience will be of little help.

Based on these dimensions, we can identify five different areas:

1. Close to agreement/close to certainty: In this zone, we gain information from the past. Based on experience, it is easy to make predictions and outline detailed plans and schedules. Progress is measured and controlled using these detailed plans. Typically, we manage these types of projects using the Waterfall approach.

2. Far from agreement/close to certainty: These projects usually have high certainty around what type of objectives and requirements can be delivered, but less agreement about which objectives are of greatest value. In situations where various stakeholders have different views on added value, the project manager typically has difficulty developing a business case because of the underlying conflicts of interest. Under these circumstances, negotiation skills are particularly important, and decision-making is often more political than technical. In these instances, the favored management approach is Waterfall or Agile.

3. Close to agreement/far from certainty: Projects with near consensus on the desired goals, but high uncertainty around the underlying technologies to achieve those goals, fall into this category. The cause-and-effect linkages are unclear, and assumptions are often made about the best way forward. The driver is a shared vision that stakeholders head toward without specific, agreed-upon plans. Typically, Agile is the approach chosen for these types of projects.

4. The zone of complexity: The low level of agreement and certainty makes projects in this zone complex management problems. Traditional management approaches have difficulty adapting here, as they often trigger poor decision-making unless there is sufficient room for high levels of creativity, innovation, and freedom from past constraints to create new solutions. With adaptability and agility being key, Scrum and Agile are useful approaches here.

5. Far from agreement/far from certainty: With little certainty and little agreement, we find the area of chaos. The boundary with the complex zone is often referred to as the “Edge of Chaos.” Traditional methods of planning, visioning, and negotiating often do not work in this area and result in avoidance. Strategies applied in these situations include Kanban and Design Thinking.
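Read as a decision rule, the Stacey Matrix maps two rough scores, agreement and certainty, to one of the five zones above. A minimal sketch, assuming scores normalized so that 0.0 means "close to" and 1.0 means "far from"; the threshold values are illustrative guesses, not taken from the source:

```python
def stacey_zone(agreement: float, certainty: float) -> str:
    """Suggest a management approach from Stacey Matrix coordinates.

    agreement, certainty: 0.0 = close to agreement/certainty,
    1.0 = far from agreement/certainty. Thresholds are illustrative.
    """
    if agreement > 0.8 and certainty > 0.8:
        return "chaos: Kanban / Design Thinking"
    if agreement > 0.4 and certainty > 0.4:
        return "zone of complexity: Scrum / Agile"
    if certainty > 0.4:
        return "close to agreement, far from certainty: Agile"
    if agreement > 0.4:
        return "far from agreement, close to certainty: Waterfall or Agile"
    return "close to agreement, close to certainty: Waterfall"

print(stacey_zone(0.5, 0.6))  # -> zone of complexity: Scrum / Agile
```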

Reducing these different project types to the degree of available knowledge, their characteristics, and the responsibilities of the leader highlights the inherent differences between them.

In Table 7.1, we see how important it is to choose the right process for each project. Although these categories are highly dependent on environment and the team’s capabilities (a project that is complicated for an expert can be complex for a beginner), most oil and gas projects are typically categorized as complicated. They are characterized by best practices and focus on efficiency. Execution works fantastically well with top-down management and clean lines of authority for command and control. In these circumstances, the Waterfall method is the best management option.

Table 7.1. Complexity in relation to management style according to the Cynefin framework.

| Environment | Characteristics | Leader's job |
| --- | --- | --- |
| Chaotic (little is known) | High turbulence; no clear cause-effect; true ambiguity | Act to re-establish order; prioritize and select work; work pragmatically rather than to perfection; act, sense, respond |
| Complex (more is unknown than known) | Emerging patterns; cause-effect clear in hindsight; many competing ideas | Create a bounded environment for action; increase the level of communication; servant leadership; generate ideas; probe, sense, respond |
| Complicated (more is known than unknown) | Experts' domain; discoverable cause-effect; processes, standards, manuals | Utilize experts for insights; use metrics to gain control; sense, analyze, respond |
| Simple (everything is known) | Repeating patterns; clear cause-effect; processes, standards, manuals | Use best practices; establish patterns and optimize; command and control; sense, categorize, respond |
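Because Table 7.1 is essentially a lookup from environment to leadership behavior, it can also be encoded as a small data structure. A minimal sketch with the table rows transcribed as a Python dictionary (deciding which environment a given project belongs to remains a judgment call):

```python
# Table 7.1 as data: environment -> (characteristics, leader's job).
CYNEFIN = {
    "chaotic": (["high turbulence", "no clear cause-effect", "true ambiguity"],
                "act to re-establish order; act, sense, respond"),
    "complex": (["emerging patterns", "cause-effect clear only in hindsight",
                 "many competing ideas"],
                "servant leadership; probe, sense, respond"),
    "complicated": (["experts' domain", "discoverable cause-effect"],
                    "use experts and metrics; sense, analyze, respond"),
    "simple": (["repeating patterns", "clear cause-effect"],
               "use best practices; sense, categorize, respond"),
}

def leaders_job(environment: str) -> str:
    _characteristics, job = CYNEFIN[environment]
    return job

print(leaders_job("complex"))  # where machine learning projects typically land
```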

What about machine learning projects? Application of machine learning and artificial intelligence to modern-day problems is an innovative process. Compared with other industries, the oil and gas industry has only recently begun applying machine learning. Only government lags further behind in the adoption of digitalization technologies [Source: World Economic Forum].

Machine learning is most effective when applied to complex problems. As outlined earlier, these are projects with many variables and emerging, interdependent interactions. With the interplay between these variables and dependencies being too complicated to predict, upfront planning is useless.

As previously stated, Scrum is the management approach most often used to tackle complex projects. Soon, we will dive into the details of applying Scrum to projects, but before doing so, let us highlight the pitfall for managers if this premise is not well understood.

The danger comes in the form of established processes and habits. As mentioned, the majority of E&P projects are managed through the Waterfall method. However, when a leader tasked with managing a complex machine learning project uses only the familiar tools of simple or complicated projects, the result is a recipe for conflict, failure, and misunderstanding.

From Table 7.2, we can see how the characteristics of a complex project, with uncertainty in the process and a need for creative approaches, entail many competing ideas and place particular demands on the skills and competencies of the leader.

Table 7.2. The importance of matching the right management style with the respective project type.

Source: Adapted from Scrum.org/PSM.

For a complex project, a good approach is to rely less on experienced professionals in the specific technical field and instead collect various theories and ideas and observe the effects of choices by using an Agile approach. (You can, of course, rely heavily on team members with experience in Agile projects.) The project team must identify, understand, and mitigate risk as new results emerge. This often happens at a rapid pace, requiring a good leader to be an integral team player, enabling the rest of the team and driving cooperation and open communication. This type of leadership is referred to as “servant leadership.” To arrive at productive solutions for complex projects, teams must approach a problem holistically through probing, sensing, and responding, as opposed to trying to control the situation by insisting on a plan of action (Fig. 7.1).

The potential mismatch between the organizational requirements of a successfully managed complex project and what a typical Waterfall environment provides (as outlined in Fig. 7.2) is why we need a different project management approach for machine learning projects. In the next section, we explore the specifics of Agile and Scrum and learn how these approaches are best applied to the world of oil and gas.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128207147000078

What Is Agile Data Warehousing?

Ralph Hughes, in Agile Data Warehousing Project Management, 2013

Fail fast and fix quickly

Contrary to waterfall methods’ goal of eliminating project errors through comprehensive requirements and design, agile does not consider failure during software development to be bad, as long as such failures are small in scope and occur early enough in the project to be corrected. This attitude has become embodied in a belief among agile practitioners that a strategy of “fail fast and fix quickly” greatly minimizes the risk faced by each project.

Few software professionals have ever witnessed a perfect project. Development teams are frequently surprised by unknown complexities and their own mistakes throughout the entire life cycle of a software application. If developers are going to encounter challenges in every aspect of a project, the agile community argues, teams are better off structuring their work so that these issues appear as early in a project as possible. With each new feature or system aspect they add to an application, agile teams want to detect failure before too much time has passed, so that any rework required will not be too much for their streamlined methods to repair quickly.

The incremental nature of methods such as Scrum allows agile teams to uncover flaws in their process and deliverables early on while placing only a small portion of the application at risk. With the incremental approach, a small piece of the solution leads the team through the entire life cycle from requirements to promotion into production. For DWBI, this small slice of the solution will land in each successive layer of the data architecture, from staging to dashboard. Misunderstandings and poor assumptions concerning any stage of engineering or architectural stratum will impede completion of the next incremental release in a painfully clear way. The team may well fail to deliver anything of value to the business for an iteration or two. With such pain revealing the major project risks during early iterations, the team can tune up its process. Because early failures reveal the biggest misunderstandings and oversights, fail fast and fix quickly provides large cost savings over the longest possible span of the project. This long payback from early improvements leads some agile methods, such as the Unified Process, to deliberately steer teams into building out the riskiest portions of a project first. [Kroll & MacIsaac 2006] A project employing Scrum can adopt this strategy as well.

By iterating the construction increments, agile teams can steadily reveal the next most serious set of project unknowns or weaknesses remaining in their development process. As developers repeatedly discover and resolve the layers of challenges in their projects, they steadily reduce the remaining risk faced by the customer. By the time they approach the final deliverable, no impediments large enough to threaten the success of the project should remain. Because the fail fast and fix quickly strategy has led the team to uncover big errors in design or construction early in the project, the rework to correct the issues will have been kept as small as possible. The potential impact to the business will have been minimized. All told, the team’s early failures on small increments of features will guarantee the long-term success of the entire project.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123964632000016

Applying Agile Principles to BPM

Mark von Rosing, ... Asif Qumer Gill, in The Complete Business Process Handbook, 2015

Agile versus Traditional Ways of Working

Agile and traditional waterfall methods are two distinct ways of developing software. The Waterfall model can essentially be described as a linear model of product delivery. As its name suggests, waterfall employs a sequential set of processes, as indicated in Figure 2. Development flows sequentially from a start point to the conclusion, the delivery of a working product, with several different stages along the way: typically requirements, high-level design, detailed implementation, verification, deployment, and customer validation, often followed by stages covering the running and maintenance of the product and addressing the need for continuous improvement.

Figure 2. Agile versus traditional waterfall.

The emphasis of Waterfall is on the project plan and managing all work against the plan. For this reason, a clear plan and a clear vision should exist before beginning any kind of development. Because the Waterfall method requires upfront, extensive planning, it permits the launch of a known feature set, for an understood cost and timeline, which tends to please clients.

Furthermore, Waterfall development processes tend to be more secure because they are so plan oriented. For example, if a designer drops out of the project, it isn’t a huge problem, as the Waterfall method requires extensive planning and documentation. A new designer can easily take the old designer’s place, seamlessly following the development plan. As described above, Agile offers an incredibly flexible design model, promoting adaptive planning and evolutionary development. Agile might be described as freeform software design. Workers only work on small packages or modules at a time. Customer feedback occurs simultaneously with development, as does software testing and deployment. This has a number of advantages, especially in project environments in which development needs to be able to respond to changes in requirements rapidly and effectively.

By way of comparison, instead of a big-bang waterfall product delivery, Agile focuses on delivering early value or product features in small increments, which is referred to as a minimum viable product or as having minimum marketable features. An agile project is organized into small releases, in which each release has multiple iterations. Within each iteration just enough work is pulled off the stack, planned, analyzed, designed, developed, tested, integrated, and then deployed in the production or a production-like staging environment. During and following the iteration the product is demonstrated to concerned stakeholders for feedback and commitments. Each iteration also involves retrospective activity, which is aimed at identifying and addressing the issues of the agile practices. In each iteration, different developers may work on different modules or requirements (also known as user stories) throughout the development process and then work to integrate all of these modules together into a cohesive piece of working-software release. In summary, this can be seen as a process, which consists of analysis and planning stages, followed by a rapid design, build, and test cycle, all of which then ends with deployment.

Experience with the agile approach has shown that it can be especially beneficial in situations in which it is not possible to define and detail the project requirements, plan, and design upfront. Agile is also an excellent option for experimental circumstances. For example, if you are working with a client whose needs and goals are a bit hazy, it is probably worthwhile to employ the agile method. The client’s requirements will likely gradually clarify as the project progresses, and development can easily be adapted to meet these new, evolving requirements. Agile also facilitates interaction and communication—collaboration is more important here than doing design in isolation. Because interaction among different designers and stakeholders is key, it is especially conducive to teamwork-oriented environments.

Figure 2 compares and contrasts key elements of Agile and Waterfall Development. In this figure, we see graphically the life cycle of each development model. Below each type of life cycle are listed the key properties of each method and how they relate to the equivalent properties of the alternative method.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780127999593000276

Estimating and Segmenting Projects

Ralph Hughes, in Agile Data Warehousing Project Management, 2013

Summary

The infrequent attention that waterfall methods pay to estimating project labor regularly leads to grossly inaccurate project estimates that hurt developers and stakeholders alike. In contrast, because agile methods build estimating into the fabric of every development iteration, they quickly make developers capable of estimating accurately and keep those skills sharp. To bolster the accuracy of their predictions, an agile team estimates using two units of measure, story points and labor hours, cross-checking and revising both viewpoints until their predictions agree.

Agile also provides the notion of size-based estimation using a technique called estimating poker, which allows teams to assign story points to backlog stories quickly. Being fast and accurate, this forecasting technique enables a team to estimate the stories of both an iteration and an entire project, allowing agile teams to predict how many iterations a project will require. Because all aspects of the developers’ collaboration must go right for teams to regularly deliver all that they estimate they can build each iteration, tracking the accuracy of a team’s estimates can quickly reveal adverse changes in the collaboration patterns within a project room.
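The iteration forecast described here is simple arithmetic: sum the story points on the backlog and divide by the team's velocity, while cross-checking that the point estimates and the labor-hour estimates tell the same story. A minimal sketch with hypothetical numbers (the 25% tolerance and the hours-per-point rate are illustrative, not part of any published method):

```python
import math

# Hypothetical backlog: (story, story points, estimated labor hours).
backlog = [
    ("load staging tables", 5, 30),
    ("build star schema", 8, 50),
    ("dashboard prototype", 3, 40),
]

velocity = 8            # points the team historically delivers per iteration
hours_per_point = 6.0   # team's observed conversion rate (illustrative)

total_points = sum(points for _, points, _ in backlog)
print(f"{total_points} points -> {math.ceil(total_points / velocity)} iterations")

# Cross-check the two units of measure: flag stories where the point estimate
# and the hour estimate disagree by more than an (illustrative) 25% tolerance.
for story, points, hours in backlog:
    implied = points * hours_per_point
    if abs(implied - hours) / hours > 0.25:
        print(f"Revisit '{story}': {points} pts implies ~{implied:.0f} h, "
              f"but the team estimated {hours} h")
```

Running this flags the "dashboard prototype" story, whose two estimates disagree; the team would revise one viewpoint or the other until they align.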

The ability to estimate a full project accurately enables agile teams to group the stories on a backlog into a series of intermediate releases. There are three techniques for planning this project segmentation: divisions drawn along the application’s star schema, its tiered integration data model, or its categorized service model. Proper project segmentation may well include some deliberate rework, which, far from being the anathema that traditional software engineers may consider it, makes perfect business sense when it allows teams to deliver crucial analytical capabilities to end users early and often.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123964632000077

Being in the room but not present

Martina Hodges-Schell, ... Sarah B. Nelson, in Communicating the UX Vision, 2015

New software development processes, new collaboration models

Everyone in the software industry has felt the change of pace in recent years. The Internet and various app stores have revolutionized delivery, increased the amount of competition, and continue to demand new ways of sating customer desire. As software products embrace these new opportunities, they ramp up the speed of new feature delivery accordingly, meaning that, across the market, we all need to run faster to keep up. The old models of software development can no longer compete, giving way to newer ways of creating software that embrace iteration, using it to respond quickly to customer desire.

In contrast to the established waterfall method, a linear process comprising spec–design–code–test–release (itself proposed as a development anti-pattern in the 1970s!), newer methodologies like Agile and Lean Startup create highly collaborative teams that are able to react to a rapidly developing marketplace and changing needs much faster and more fluidly.

Waterfall, Agile, and Lean Startup

The waterfall model, where progress flows downward, arose in the 1970s as a way of translating existing physical product development methods to software. It consists of a series of sequential stages, beginning with conception, through to design, development, testing, release, and maintenance. There are usually sign-off “gates” between each stage so the business can be confident that its requirements have been met before proceeding to the next activity. Each stage outputs a specification that is taken up by the actors in the next stage for implementation.

While theoretically waterfall guarantees the software will be implemented “to spec,” it offers no guarantee that the people writing the specification from one stage will understand the needs or realities of the other stages, nor does it allow implementers the scope to suggest better solutions for issues they encounter farther down the line. It also does not deal well with changes to business reality or product vision during the development phase.

Agile arose in the early 2000s as a formalization of many existing developer-led responses to these inefficiencies with waterfall. A very broad range of methodologies sits under the “Agile” definition but, in general, what they share is a core of iterative processes, where software is “just-in-time” designed, developed, and released, by an integrated cross-discipline team that spans the whole business (including the client, if there is one) and is empowered to find its own ways to solve business needs. In Agile projects, there is supposed to be no defined end state – the job of the development team is to experiment and see what solutions provide the required market response. Rapid iteration means that the business can tweak the product as it goes, responding to changes in the business landscape far faster than waterfall. This model of iterative experimentation is known as “Build–Measure–Learn.”

As Agile was originally developer-led, it can be hard to cater for a holistic design or UX vision within an iterative framework that expects to receive business needs and output code within the next cycle.

Lean Startup takes the iterative build–measure–learn process from the core of Agile software development and applies it to the entire business model, aggressively prioritizing product features until a Minimum Viable Product (MVP) emerges that can be developed quickly and put to market to test the core idea of the business at low cost, low risk and without wasting time. Market feedback from the MVP is used to redirect business priorities for further iterative product development. Designers working in Lean Startups often find that they encounter many of the same challenges that Agile presents, often made even more acute by the intense iterative nature of the entire business.

There are assumptions and processes that we have always taken for granted in digital design and development. Agile and Lean, with their acknowledgement that we can’t predict the future, put an end to these. No longer can we specify the finished experience and expect development to proceed toward that goal after we hand off our deliverables. We can’t hand over a finished document, move on to the next thing, and wonder what happened to our idea when we see the finished product after launch, sometimes barely recognizable from our intended blueprint.

As designers, we need to recognize that the forces that encouraged developers to create Agile project management processes are the same forces that should encourage us to adopt them: unless you can predict the future, a waterfall approach tends to make products and services that aren’t as good as they could be. Our mantra is that our work is not an experience until users get to interact with it. After the discovery phase is over, changes to the business landscape, tested assumptions, and reactions to development milestones will change the needs of the product in ways that your documentation can’t possibly predict.

Figure 5.1. The Linear Waterfall Process.

(Image by: Martina Hodges-Schell.)

Figure 5.2. The Cyclical Iterative Process of Agile.

(Image by: Martina Hodges-Schell.)

Figure 5.3. The Iterative Build–Measure–Learn Loop of Lean Startup.

(Image by: Martina Hodges-Schell.)

The only way to maintain user-centricity when this happens is for you, as a UXer, to stay in the process. If you leave the response to someone else, then what you have designed is not the experience; it is just your deliverables. When we divide responsibility for aspects of the product among parties who do not communicate, we create a silo – a place into which requirements flow and deliverables emerge, stripped of their surrounding context. The reasoning behind design decisions can disappear, giving those left to implement our work no reason to prefer it to their own solutions.

In different environments, we may be even further removed from the project flow and decision-making. The authors have observed and worked in design agencies, UX consultancies, and client-side (from dot.com behemoth to day one at start-ups). Often, designers have to collaborate with offshore teams, third parties, external client organizations, and a whole host of internal teams. It is no surprise that it can be difficult to move your ideas beyond your notebook.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124201972000059

Data Organization Practices

Charles D. Tupper, in Data Architecture, 2011

Architectural Development Methods

The “waterfall” method was described by Royce in 1970. It is still the most widely used and most familiar method in the software development world. A template of this process is shown in Figure 9.1. It is called a “waterfall” for obvious reasons: all the effort of one stage needs to be completed before going on to the next stage. It is a fixed sequential process in which the work products of one level feed as input to the next lower level, much like water going down a stepped spillway.

Figure 9.1. The waterfall model.

From Royce, 1970.

A development project based on the waterfall method has the following characteristics:

1.

You must carefully and completely define the output work product before proceeding to the next stage.

2.

You must commit to a set of requirements that have been frozen at a fixed point in time.

3.

Deviation from the defined requirements or the succeeding design is an indication that you failed in the requirements-gathering process.

While this waterfall process has been helpful in promulgating the understanding of some of the techniques and concerns discussed earlier in this chapter, it still has some shortcomings. Figure 9.1 shows the basic process steps and provides some sequencing information. The shortcomings are the following:

1. It does not adequately allow response to changes in requirements; that is, there is no way to adjust for missed requirements or newly materialized requirements. There is simply no way to go up the waterfall.

2. It assumes a uniform and orderly sequence of development stages. It assumes that the stages are predictable in length and that each is required.

3. It is rigid when it comes to rapid development techniques such as prototyping, in which some developed components might be kept and others thrown away.

4. There is no real risk assessment done until late in the process.

Since the 1970s, developers and methodologists have been trying to address the inadequacies of the waterfall method. A solution that has worked with some degree of success is the “iterative waterfall” approach. The only difference between this approach and the traditional waterfall approach is that there are multiple iterations of analysis, data gathering, and design before going on to the next stage. Simply put, there are iterative data-gathering and design-presentation sessions, which are reviewed with the user before progressing; these are iterated until completion to ensure that all requirements have been gathered before moving on to the next stage. This altered approach has met with some success but still has some flaws. It has addressed the changing and materializing requirements but has not addressed the rigidity or the sequencing. All requirements still need to be completed before moving onward, despite a staggered or layered approach, as shown in a primitive development diagram.

In 1988, B. W. Boehm developed the spiral development method shown in Figure 9.2. As one can see from its process, it addresses some of the problems associated with the waterfall method. Every stage of requirements analysis is accompanied or followed by a risk analysis phase. Also, the requirements go from simple (i.e., architectural) to more detailed as the spiral moves outward. The spiral is also a better predictor of expense, since the further the analysis proceeds, the more expensive the project becomes.

Figure 9.2. The spiral model of the software process.

From Boehm, 1988.

But none of these methods truly represents the real work flow. As Watts Humphrey said in Managing the Software Process (1989, pp. 249–251), “Unfortunately, the real world of software development doesn’t neatly conform to either of these models. While they represent the general work flow and provide overview understanding, they are not easily decomposed into progressively finer levels of detail that are needed to guide the work of the software professionals.”

Additionally, there are many more architectural models, such as the Agile method, the V method, and even the double-V method. All deal with how best to capture the requirements, interpret them, and implement them in the shortest period of time to give the users what they want.

The basic problem with architectural-level models or universal models, as they are called, is that they are, well, architectural. They are high-level flows that have been generalized to account for individual differences in detail processes. While this is a good method of communication and is necessary for the continued survival of the company, it is not what a software developer needs. To this point, it has been enough to speak about these as a common frame of reference. It does provide the understanding and communication basis for all involved. Unfortunately, the developer of the software referencing the data architectures needs something more specific.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123851260000097

Iterative Development in a Nutshell

Ralph Hughes, in Agile Data Warehousing Project Management, 2013

Three nested cycles

Because Scrum offers an alternative to traditional, “waterfall” methods, the reader may need a quick outline of a waterfall method for contrast. Figure 2.1 provides a good reference. The main line running diagonally from left to right is taken from the white paper that is widely regarded as having first defined the waterfall approach. [Royce 1970] As this process progresses from system requirements to program design and then to coding, Figure 2.1 depicts the stack of detailed specifications for the engineers and developers each step generates for the actors in the next step to fully read and understand. This accumulating mass of specifications is often the trigger that causes many software professionals to search for an alternative method such as Scrum, because, as the heft of the requirements and design documents grows, so do their doubts that anyone downstream will actually read and accurately apply all that is specified.

Figure 2.1. Typical waterfall method with key project management artifacts indicated.

Linking into a waterfall method’s chain of steps are a couple of key project management artifacts for which the reader should seek agile equivalents as we explore Scrum in the next few chapters. The first is the work breakdown structure (WBS), in which a project manager gathers all the tasks the development team must complete in order to deliver the envisioned application successfully. Software engineers usually provide the technical work items listed in a WBS, although unfortunately the overall process often takes so long that the engineers who forecast the labor steps are not the developers required to follow them once coding begins. Once the WBS is drafted, the project manager can direct those developers to forecast the labor hours needed to accomplish each work item listed on the WBS, yielding a definitive estimate of the time and cost needed to complete the project. The goal of a definitive estimate is an accuracy of plus or minus 10%. [PMI 2008] With such accuracy, the traditional approach stipulates that a project manager should then convert the definitive estimate into a delivery schedule predicting the completion dates for key milestones of the project as well as the finished application. Given this chain of engineering events and these project management artifacts, waterfall practitioners consider the process of delivering software on time, on budget, and with all required features to be fundamentally linear and conceptually uncomplicated: everything is specified, so project managers need only have team members follow the plan. Consequently, waterfall methods rely heavily on a project manager to understand the full list of tasks comprising the project and to assign those tasks to developers as required by the delivery schedule. With the project manager serving as a central dispatcher of work, traditional method advocates consider the waterfall approach scalable to hundreds of developers, even when they are distributed all over the globe.
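Mechanically, the definitive estimate is a roll-up of the WBS hours with a plus-or-minus 10% band around the resulting cost. A minimal sketch with hypothetical work items and a hypothetical blended rate:

```python
# Hypothetical WBS: (task, estimated labor hours) supplied by the engineers.
wbs = [
    ("system requirements", 120),
    ("program design", 200),
    ("coding", 400),
    ("testing", 160),
]
hourly_rate = 100.0  # illustrative blended rate

total_hours = sum(hours for _, hours in wbs)
cost = total_hours * hourly_rate

# A definitive estimate targets an accuracy of plus or minus 10% [PMI 2008].
low, high = cost * 0.9, cost * 1.1
print(f"{total_hours} h -> ${cost:,.0f} "
      f"(definitive range ${low:,.0f} to ${high:,.0f})")
```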

In contrast, Scrum is an iterative approach for 6 to 10 colocated developers. It does not attempt to specify all the requirements, design, or development tasks of a project in advance. Instead, Scrum developers start with a list of important project features the application must have and then discover the project’s details incrementally as they repeatedly deliver the working code in small chunks. Instead of pursuing a single, linear effort to build the desired applications, Scrum teams typically follow a series of three nested cycles as depicted in Figure 2.2.

Figure 2.2. Three cycles of generic Scrum.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123964632000028

Methodology and Design

Tom Laszewski, Prakash Nauduri, in Migrating to the Cloud, 2012

Assessment

The assessment phase is akin to the requirements gathering phase of the Waterfall method for software development. Instead of gathering business requirements from a software development perspective, however, you collect information pertaining to project management (who will manage the project, and how), the potential cost of the migration, migration approaches, tools to use, and so on. In this phase, a detailed inventory of the application portfolio is created to assess the impact of database platform migration on the IT ecosystem, including other applications, integration services, reporting, and backup and recovery processes. For a thorough assessment, the following topics are typically of interest in this phase:

Drivers for migration (challenges, requirements): It is very important to understand the drivers behind a migration effort. For example, if the customer's licenses for legacy technologies are expiring soon, their need to migrate to a newer platform is urgent. The expiry of official support and the extension of support for an existing database platform may be very expensive for such customers, so they must migrate quickly, but they may not need to transform their applications, so they would prefer very few changes to their applications as a result of the migration.

Inventory of current environment: Creating a detailed inventory of the current application portfolio really helps in terms of understanding the scope of a migration effort. This includes capturing information regarding the number of programs, scripts, and external interfaces involved. It also includes hardware and software configuration information, including operating system versions, database versions, features/functionalities in use, and similar information.

Migration tools/options: It is not uncommon to test different migration tools and technologies to assess their efficiency and accuracy. A high level of automation along with accuracy in migration can result in less time spent in migration and testing. Many customers conduct small-scale proof-of-concept projects to try different migration options, such as emulation or wrapper technologies, which allow applications to communicate with a different database without requiring code changes in the application. This can reduce the overall migration effort in cases where the application is simple and does not have demanding performance requirements. (A toy sketch of the wrapper idea appears after this list.)

WARNING

Always choose a migration tool or vendor that has a proven track record and does not claim to support migration of any programming language or database on the fly. In almost all cases, vendors must spend a significant amount of time enhancing their tools instead of actually performing migrations, because some modification to the tools is essential to address fringe cases of programming language or database feature usage that cannot be automatically converted. Establish verifiable success criteria for these migration tools and/or vendors so that the chances of failure during migration project execution are reduced.

Migration service provider: Businesses typically evaluate at least a couple of migration service providers if they do not have migration skills and staff in-house. In many cases, migration service providers utilize their own tools to perform detailed assessment and migration.

Migration effort estimate: This is usually provided by the migration service provider or database vendor and is the most common information businesses request when considering a migration project. We discussed how to estimate a migration effort in detail in Chapter 2. As we noted, the estimate depends on factors such as database and application size, components, and database complexity factors, among others.

Training requirements: Training requirements for existing IT staff on the new database platform need to be assessed to ensure that they can support the new environment effectively and can participate in the migration process if required. Therefore, it is important to identify appropriate training programs for the IT staff based on their roles in the organization. The most common training programs recommended for database administrators and developers who are new to the Oracle database are:

Introduction to the Oracle Database

SQL, PL/SQL Application Development in Oracle

Oracle Database Administration

Oracle Database Performance Tuning

Knowledge transfer can also take place from the migration project team to the administration and development teams that will be responsible for maintaining the new system in the future.

IT resource requirement for the target database: Requirements for deploying the new database environment also need to be assessed. This assessment should include critical database features and functions as well as additional software that may be required to support the migration process and maintain the migration after it has been deployed. These resources typically include hardware, storage, and Oracle software, including the Oracle database and Oracle SQL Developer, and optionally, Oracle GoldenGate. For virtualization in a cloud environment, Oracle VM software can also be used.

IT resource requirement for the migration project: Resources such as the hardware and software required for performing migration tasks also need to be identified. Organizations may need to acquire new hardware and software to support the migration project, or they can provision these resources from a cloud service provider (Infrastructure as a Service [IaaS] and Platform as a Service [PaaS]).
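To make the wrapper option mentioned above concrete: the application keeps calling the database API it always has, while a thin adapter forwards those calls to the new platform. A toy sketch (all names are hypothetical, and sqlite3 merely stands in for the target database; real wrapper products operate at the driver or wire-protocol level and also translate SQL dialects):

```python
import sqlite3  # stands in for the new target database


class LegacyDbWrapper:
    """Hypothetical adapter exposing the old API the application expects,
    forwarding each call to the new database so application code is unchanged."""

    def __init__(self, dsn: str) -> None:
        self._conn = sqlite3.connect(dsn)

    def exec_sql(self, statement: str):
        # The old API's single entry point; a real wrapper would also rewrite
        # source-dialect SQL here before forwarding it.
        return self._conn.execute(statement).fetchall()


db = LegacyDbWrapper(":memory:")
db.exec_sql("CREATE TABLE accounts (id INTEGER, balance REAL)")
db.exec_sql("INSERT INTO accounts VALUES (1, 250.0)")
print(db.exec_sql("SELECT * FROM accounts"))  # [(1, 250.0)]
```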

Sufficient time needs to be allocated for this phase to have a complete and meaningful assessment of the migration process. It is not uncommon to see large IT organizations with tens of databases to migrate spending 8 to 12 weeks performing a full assessment. When performing in-depth assessments to assist in migration projects, system integrators use an array of tools that capture exhaustive amounts of information from the source systems; this information helps them analyze the dependencies between an application's various components and the database, as well as the complexity of the migration effort. These tools analyze every application program and every line of code in these programs to paint a detailed picture. The following information helps a system integrator assess the impact of a database migration on an organization's applications:

Programs interacting directly with the database: This helps the system integrator to identify the number of programs that may require changes to SQL statements or changes to database-specific APIs.

Programs or other applications that execute transactions directly: This helps the system integrator to identify programs that may be impacted if there are any changes to transactional behavior in the target database (Oracle), such as:

Programs that have explicit transaction control statements in them (e.g., COMMIT/ROLLBACK). Typically, these programs maintain control over a transaction they initiate.

Programs that invoke a stored procedure to initiate a transaction, but have no control over the transaction. In this case, the stored procedure maintains control over a transaction.

Programs that issue explicit Data Manipulation Language (DML) statements (e.g., INSERT/UPDATE/DELETE), but do not control the full transaction. In many cases, a master program initiates these programs to execute database transactions and return results.

Programs or scripts that offload data from or load data into the source database: These programs will eventually need to be modified to include changes such as the use of Oracle database-specific utilities, associated commands, and parameters, as well as changes to any embedded SQL.

The number of management or database administration scripts: Identifying such scripts helps the system integrator estimate the effort involved in migrating these scripts by either rewriting them or discarding them completely in favor of Oracle-specific tools such as Oracle Enterprise Manager (OEM) for routine administration and monitoring tasks.

The type and number of external interfaces: All of these interfaces need to be further analyzed to estimate the migration effort.
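A crude first pass at the code analysis described above can be made by pattern-matching over the application source tree, tallying which programs fall into each category. A minimal sketch, assuming the sources are plain-text files under a hypothetical ./src directory; real assessment tools parse every program properly rather than grepping it:

```python
import re
from pathlib import Path

# Rough signatures for the categories of database interaction listed above.
PATTERNS = {
    "transaction control": re.compile(r"\b(COMMIT|ROLLBACK)\b", re.I),
    "explicit DML": re.compile(r"\b(INSERT|UPDATE|DELETE)\b", re.I),
    "data load/offload": re.compile(r"\b(LOAD\s+DATA|BULK\s+INSERT|EXPORT)\b", re.I),
}

def classify_sources(root: str) -> dict:
    """Tally which files under `root` show each kind of database usage."""
    tally = {kind: [] for kind in PATTERNS}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for kind, pattern in PATTERNS.items():
            if pattern.search(text):
                tally[kind].append(str(path))
    return tally

for kind, files in classify_sources("./src").items():
    print(f"{kind}: {len(files)} program(s)")
```

Counts like these feed directly into the effort estimate: each flagged program is a candidate for SQL or API changes during the migration.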

It is best to capture as much information as possible in the assessment phase and to analyze it so as to head off any technical challenges that may emerge during migration. This also has the benefit of building comprehensive documentation in the long run.

NOTE

The assessment phase tends to be an intense exercise during which crucial decisions affecting the migration project are made. Sometimes organizations spend months instead of weeks finalizing their choice of migration strategy, tools, and service provider. Many service providers offer in-depth assessment as a paid service, performing an inventory of the current environment and reporting on the dependencies among various applications, impact analysis, and feature/functionality usage. During the assessment phase, it is quite common for organizations to conduct pilot projects to prove that the choices that have been made will help the organization achieve its goal.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B978159749647600003X

Balancing Needs through Iterative Development

Elizabeth Goodman, ... Andrea Moed, in Observing the User Experience (Second Edition), 2012

Waterfall Development

Obviously, we favor iterative development processes. However, if your company uses a waterfall method, there’s no reason you can’t use the techniques outlined in this book. In a waterfall process, user research typically happens at the beginning and end—before the requirements are written, and then after some code is written. The first rounds of user research inform the requirements, and the last evaluate either a working prototype or a finished product.

Because so much depends on getting the requirements right, waterfall processes should include more early research activities (see Figure 3.7). One way to make the process a little more iterative is to schedule interviews or focus groups during the design stage to discuss design scenarios, concepts, or even paper prototypes. Since implementation has not begun, you may be able to get development teams to make changes based on feedback from engagement with potential users.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123848697000036

Which model is an extension to the waterfall process model?

By extending the operational life cycle (denoted as the maintenance cycle) and then attaching it to the waterfall model, they devised the b-model, as shown in Figure 2. This was done to ensure that constant improvement of the software or system would become part of the development stages. ...

What are the 5 stages of the waterfall model?

But generally, you can group the activities of the waterfall approach into five stages: planning, design, implementation, verification, and maintenance.

1. Requirements and Planning
2. Design
3. Implementation
4. Verification/Testing
5. Maintenance

Why waterfall model is used in software development?

The Waterfall Model is a sequential model that divides software development into pre-defined phases. Each phase must be completed before the next phase can begin, with no overlap between the phases. Each phase is designed to perform a specific activity within the SDLC. The model was introduced in 1970 by Winston Royce.

What is SDLC waterfall?

The waterfall model is a linear, sequential approach to the software development life cycle (SDLC) that is popular in software engineering and product development. The waterfall model emphasizes a logical progression of steps.
