This was serious because these are core systems running on top of the Agile Platform, and include our website, our community, our partner network, our product management tool, and a host of other internal applications.
Since responsiveness and availability are two fundamental traits of a great app, we had to do something about it. We decided to move our infrastructure to the cloud, specifically Amazon EC2. And because these were core apps, we had to do it with minimal downtime while guaranteeing all the required access.
Another part of the challenge was that these systems are used by internal OutSystems applications deployed in different infrastructures -- including business-critical integrations with the R&D process. This raised security concerns and challenges in how to keep supporting these integrations with minimal changes to the architecture.
Since the machines about to be replaced were near retirement, and because on Amazon we can add machines as needed, we decreased the number of servers from 5 front-ends to 3. We have an additional machine with SQL Server and, at the time of writing, all machines are m1.large (2 CPUs, 5 Amazon ECUs, 7.5 GB RAM). To distribute load between the Amazon front-ends, we are using Amazon's Elastic Load Balancer.
In order to perform the move, we used an approach similar to the hot-standby approach suggested to our Agile Platform customers. The idea is to set up an infrastructure on the cloud, and use the disaster recovery option to move control to this new infrastructure. For this, we had to:
- Make sure all users were accessing the in-house systems from the outside network, to verify that everything needed was in fact available. This was also a good way to identify content that needed to be accessed via HTTPS for security reasons;
- We had to set up a database mirror for the SQL Server database. This was a big step, considering we moved 120 GB to Amazon -- a 2-day transfer! And since our internal infrastructure kept working, by the time it was done we had an extra gigabyte to move. Also, with two remote installations of SQL Server, we had to set up a secure mirror with certificate authentication -- which has its own tricks as well;
- We then installed the front-end servers in the cloud, and set them up to connect to the in-house database. This was easy, since OutSystems already provides Amazon images. The trickier bit was to ensure a secure connection with our internal systems, something we achieved using OpenVPN;
- Finally, we tested to make sure everything was working. We used some automated test scripts, and took the chance to identify dependencies on internal systems. To fix these dependency issues, we had to move some components back to the internal datacenter, and connect everything using web services over HTTPS. Fortunately, this is really easy to do with the Agile Platform, as it makes building web services and delivering a service-oriented architecture easy and fast! The only thing we couldn't test was performance... after all, the database was still in our datacenter, 3,000 miles from the front-ends.
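As a back-of-the-envelope check on the 120 GB mirror transfer mentioned above, a 2-day copy works out to roughly 5-6 Mbit/s of sustained throughput. A quick sketch using the figures from this post (decimal gigabytes assumed):

```python
# Rough effective throughput of the initial mirror copy.
# Figures from the post: ~120 GB moved in about 2 days (48 hours).
def effective_throughput_mbps(gigabytes: float, hours: float) -> float:
    """Return the sustained transfer rate in megabits per second."""
    bits = gigabytes * 1e9 * 8   # decimal GB -> bits
    seconds = hours * 3600
    return bits / seconds / 1e6

rate = effective_throughput_mbps(120, 48)
print(f"{rate:.1f} Mbit/s")  # prints 5.6 Mbit/s
```

That kind of number makes it clear why the initial copy, not the ongoing mirroring traffic, dominates the migration timeline.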
With all the preparation done, it was time to go full cloud. Here's what we did:
- We started by turning off the in-house systems. No problem here; we just had to put up an unavailable page and stop the OutSystems Agile Platform and IIS.
- Next, we brought the cloud database server up. We were a bit worried about the time it would take to resync our SQL Server mirror after the role change, but it only took about a minute. Overall, it took less than 5 minutes to have everything configured and ready.
- After that, we reconfigured the Agile Platform to use the cloud database. No problems there, we just had to launch the configuration tool, modify the database settings, and restart the platform.
- Finally, we had to change the DNS servers to point to the new infrastructure location.
In the end, we definitely achieved our goal: Our core applications are running faster, and in time we'll be able to measure how connectivity improved since moving to the cloud.
It's also amazing how fast we were able to make the move. The procedure took 4 hours, but it could've easily taken 30 minutes if it weren't for DNS. DNS ended up being the big bottleneck of the whole process, due to the time changes take to propagate. This is something we'll need to do differently (for example, by lowering the records' TTL well ahead of the switch) if we repeat this operation in the future.
In the end, the work was made a lot simpler thanks to the Agile Platform. Not only because it's fully prepared for these server redundancy scenarios, but also because it's so easy to change configurations and have everything running with the new settings in no time.
Maybe even more importantly, the Agile Platform made it fast to split out the components we wanted to keep in-house and glue everything together with a service-oriented architecture. Being able to move this fast really opens a world of possibilities!
If you want to learn more about the cloud and how the Agile Platform can help you, click here to visit our cloud page.
"Even having this limited experience, the success of our first agile deployment instilled confidence that both the methodology and the technology would effectively support the project delivery."
"We actually issued the final release a month early, with an even higher than- promised level of functionality."
"Financially, we were able to realize significant savings in both capital expenditure and operating expense," Mr. Ornelas says. The final solution came in at only 15 percent of the total cost proposed by the vendor, and it was totally tailored to the organization.
While finding out how productive your teams are might seem scary, it is far better than not knowing. In this post I will provide three steps and some resources to help you start down the path to understanding the productivity of your application development teams. Once you have a means to measure and monitor, then you can iteratively improve your application development productivity.
1. Make the case for a metrics program
Measuring just for the sake of measuring isn't very useful. The real benefit of having metrics is gaining insight into what can be improved. Not only that, metrics also help you set objectives, what I like to call ultimate goals, and then focus your team on achieving them.
Some of the best advice I have found is from the analysts at Forrester. This "Metrics for Application Development" article by Liz Barnett (UPDATE: Seems like the article is no longer available on the Forrester website. Here's an interview with the author on the subject), discusses why metrics for application development are important and gives some good advice on what you need to measure and why automating the measurement process is a good idea.
2. Measure your productivity
From my own research and experience, function points are one of the best mechanisms to measure application size and team productivity. As stated by the International Function Point Users Group (IFPUG), using this metric for application development focuses on measuring the functionality delivered to the business, rather than how big the application is internally, how complex the code is, or what language was used to write it.
Function points have been around for a long time and are a tried, proven, and standardized approach. There is a wealth of resources on how to measure function points; I personally liked this PDF from a SoftwareMetrics training course and this 5-step process for counting function points from devdaily.
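To make the counting mechanics concrete, here is a minimal sketch of an unadjusted function point count using the standard IFPUG component weights. The sample counts are purely illustrative, not taken from any real project:

```python
# Standard IFPUG weights for the five function types, by complexity level.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "avg": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "avg": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "avg": 7,  "high": 10},  # external interface files
}

def unadjusted_fp(counts: dict) -> int:
    """counts maps (function type, complexity) -> number of components."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

# Illustrative example: a small CRUD-style application.
example = {("EI", "avg"): 10, ("EO", "low"): 4, ("EQ", "avg"): 6,
           ("ILF", "low"): 5, ("EIF", "low"): 2}
print(unadjusted_fp(example))  # prints 125
```

A full IFPUG count then applies a value adjustment factor on top of this, but the unadjusted figure is usually enough to start tracking team productivity.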
3. Compare to a baseline
After measuring your productivity, you need to determine how your team performs compared with other teams: poor, average or better than average. There is good news - you can find benchmark information in numerous places. The one I like best is ISBSG. They provide function point metrics from across the IT industry and with their data you can compare your results and act accordingly.
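The comparison itself is simple arithmetic. Here is a sketch; the baseline figure below is purely illustrative, so substitute real numbers from your chosen benchmark source (e.g., ISBSG data):

```python
def delivery_rate(function_points: float, effort_hours: float) -> float:
    """Project delivery rate in hours of effort per function point
    (lower is better)."""
    return effort_hours / function_points

# Illustrative: the team delivered 250 FP in 2,000 person-hours.
team_rate = delivery_rate(250, 2000)   # 8.0 hours per function point
baseline_rate = 10.0                   # assumed benchmark, hours per FP

if team_rate < baseline_rate:
    print("better than baseline")
else:
    print("at or below baseline")
```

Tracking this rate release over release is what turns a one-off measurement into a productivity trend you can act on.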
Go for it!
So, what are you waiting for? Get a small team to spend some time coming up to speed on counting function points for your organization or go find a third party resource to count for you. The investment will be worth it when you can accurately assess your application team's productivity and make decisions based on this knowledge.
At OutSystems we got our customers bootstrapped on this process by measuring their performance with something we called the OutByNumbers project. You can take a look at the results from that study here and see how we automated function point counting. I have to say the results are pretty surprising!
There's a lot of blogging about agile estimating and planning, but most, if not all, that I've come across only pertains to after the project has been started. Further, what is being estimated and planned is the iterations. So how does one define the budget for the agile project itself, specifically a web business application project? I mean, the cost of hardware, third-party software, facilities, etc. -- everything except the actual software development cost -- can be fairly straightforward to budget.
But what about the software development cost? Using conventional methods or techniques is certainly an option but they do not account for the fact that agile projects tend to cost less when compared to similar projects. So, perhaps this is not the way to go. Besides, business owners want to know what they're getting for their money and how much it is going to cost them.
At OutSystems, we solved this by taking the following 3 steps:
- First, we work with the business to gather and transform high-level requirements into user stories. Stories highlight features that detail what the user wants to do (e.g., create, list, search) to specific objects (e.g., customer, store, loan).
- These features are patterns of implementation; it is the patterns that are estimated in hours, based on years of experience delivering agile projects successfully. This experience is encapsulated in our OutSystems Agile Network Sizing (ANS) tool (BTW - you can see a demo video here). The tool allows us to set various influencing variables -- such as number of developers, iteration size, and number of user profiles -- that let us adjust the estimates based on complexity. There are always unknowns during funding, and these variables provide a means to shore up the areas that may require more attention during the project.
- Given the resources, we are able to calculate what we call the ideal project timeline, complete with sprints and a target go-live date.
You can probably use story points to achieve the same thing assuming you have a common baseline and have equated points to some level of effort which, in turn, equates to dollars.
What if you get funded without this process?
Let's say your project gets funded but you did not use patterns or story points to determine your budget. You start the project, gather your user stories, prioritize them and then size them. You conduct your sprints and, over the course of, say, 3 iterations, calculate your average velocity and capacity. Having your averages, you can calculate the average cost of each iteration and thus of the project itself, potentially also determining when you will run out of budget. Unfortunately, when using story points and a burndown chart, you still cannot predict with enough certainty when the project will be done. To most stakeholders, stating an approximation or a moving target date is just not acceptable, even if you stress the benefits of going agile: early delivery, handling evolving requirements, software that meets specifications, early risk mitigation, etc.
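The velocity-and-burn arithmetic described above can be sketched as follows (all figures are illustrative):

```python
import math

def project_forecast(completed_points, cost_per_sprint,
                     backlog_points, budget_left):
    """Average the velocity over past sprints, then project the remaining
    sprints and cost; returns (sprints_left, cost_left, within_budget)."""
    velocity = sum(completed_points) / len(completed_points)
    sprints_left = math.ceil(backlog_points / velocity)
    cost_left = sprints_left * cost_per_sprint
    return sprints_left, cost_left, cost_left <= budget_left

# Illustrative: 3 sprints done at 18, 22, 20 points; each sprint costs 30k;
# 90 points remain in the backlog and 150k of budget is left.
print(project_forecast([18, 22, 20], 30_000, 90, 150_000))
# prints (5, 150000, True)
```

Note that this only projects averages forward; it says nothing about the variance in velocity, which is exactly why the target date remains a moving one for stakeholders.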
So what do you tell them? How do you accurately estimate the time and cost it will take and then convince the stakeholders that the project will be done on time and within budget? Do you give odds when answering when the project will be done? What do you do if you run out of budget? Are you gambling with your budget?
Here's the process we've developed:
a) We bootstrap the stories, budget, timeline, and sprint definitions from the sizing tool into our project management tool - the OutSystems Agile Network Project (ANP). The bootstrap process calculates the budget for each iteration based on the overall budget.
b) Each iteration is allocated a budget for development, technical debt, change requests, testing, and sprint review.
c) Once the information is in ANP, we review the stories with the business, prioritize them and even negotiate what goes in an iteration, just like any other agile project.
d) We also break down the user stories into work-items and estimate them (in hours) in conjunction with the developer(s) assigned to complete them. It is the work-items that we commit to deliver for the sprint. This helps us ensure that we do not exceed the iteration's budget.
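A minimal sketch of the per-iteration allocation in steps (a) and (b). The category percentages here are assumptions for illustration only, not OutSystems' actual split:

```python
# Assumed split of each iteration's budget across the categories in (b).
ALLOCATION = {
    "development":     0.60,
    "technical_debt":  0.10,
    "change_requests": 0.10,
    "testing":         0.15,
    "sprint_review":   0.05,
}

def iteration_budgets(total_budget: float, iterations: int) -> list:
    """Split the overall budget evenly per iteration, then by category."""
    per_iteration = total_budget / iterations
    return [{cat: round(per_iteration * pct, 2)
             for cat, pct in ALLOCATION.items()}
            for _ in range(iterations)]

budgets = iteration_budgets(300_000, 5)
print(budgets[0]["development"])  # prints 36000.0
```

Keeping explicit envelopes for technical debt and change requests is what lets the team absorb surprises without blowing the iteration's total.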
In summary, estimating based on patterns enables us to define the project budget. Resource estimates and other influencing variables, coupled with the budget, enable us to provide a project timeline and target go-live date. This information allows us to manage the project at the sprint level, ensuring that we are on target while not exceeding our budget.
What method(s) do you use for estimating project budget and timeline? What method(s) do you use to ensure you deliver within budget and on time?
The customer is a large Portuguese food distribution and consumer goods manufacturing company with an international presence. With nearly 25,000 employees, they are used to huge IT projects involving multiple departments, complex requirements, large budgets and long timelines.
Before agile, many of these projects were delivered late, over budget and in some cases actually failed. In 2005 they became an OutSystems customer; embracing the Agile Platform and an agile methodology. As they matured in their agile practices, the CIO saw the value and today they operate under the CIO's mandate that only projects scoped with a timeline of less than three months and a budget under 300K will be approved.
What happened before the CIO mandate was put in place? Here's what they told us:
1. They would spend two to three months in meetings, aligning all opinions in order to create a huge requirements document;
2. When they finally started developing the systems the business team had disappeared and in most cases forgotten about the project;
3. By the time the project was ready to be tested the key users had changed, the business had changed and the project delivery team immediately entered into a lengthy negotiation phase to reconcile what was delivered versus what the business really needed.
This customer states that with the agile methodology and their new CIO mandate they improved the success rate of their projects in several ways:
1. The business team is more motivated and involved as they are able to see how the projects are progressing on a regular basis;
2. The business gets to be more responsible for the decisions that shape the project direction because they see and constantly test the application;
3. The business and IT avoid the costly, wasteful exercise of building complex requirements documents because they now fully realize that they can never document every detail in a specification;
4. From very early on in the project, IT can see if the project is really what the company needs and identify any mismatches quickly to reduce the amount of time, dollars and resources that might be wasted;
5. Even for big projects, Agile methodology is used - and forces the team to split the project into phases. This exercise divides the scope into smaller, manageable projects with incremental releases and decreased risk.
So, both for this CIO and many others, we are finding that an agile approach to application development really helps increase project success rate and reduce risks. Even in a company like this one that has complex application needs, lots of departments and bureaucracy - agile really works!
What are the technical leaders in your company doing to help drive agile project delivery? Or, what would you like to see them doing?
Share their rules, guidelines and mandates along with your ideas!
When I was first introduced to the game I wasn't quite sure what Agile concepts I would learn about. Was this about the importance of retrospectives, scrum meetings and communication, or scrum of scrums? What I learned were some interesting aspects of all these topics plus what I think was the real message of the exercise: with experience comes speed and quality, and experience can be injected into the team when done at the right time!
So for those who don't know this game it is very simple and quite fun. A good team building exercise even if you are not practicing Agile.
Watch this video now and my findings below will make more sense.
Mike's findings from the Ball Point Game
As you can see, the game was fun. In case you did not notice, the video was from two different sessions: one in Lisbon, Portugal and the other in 'sunny' England. (I know, hard to believe it was sunny in the UK!) Let's reflect on the four topics I mentioned above:
1. Retrospectives are really key for the delivery team to be able to adjust and improve. They also give you a point in time to measure your effectiveness as a team. Important if you are interested in improving!
2. Scrum meetings and communication are part of the secret sauce for agile. It was very evident in the video. You saw the teams discuss their performance, brainstorm ways to improve and then implement the improvements. Of course this concept extends beyond the 'ball passers' to the whole Agile team including the business users, testers, etc. If you don't have regular interaction across the whole team the Agile process breaks down very quickly.
3. For large teams the concept of "Scrum of Scrums" is critical. Our first group was very large: there were four separate delivery teams, and their Scrum Masters met to discuss the 'project' and then collaborated with the smaller teams. While I agree this is important, it raises the question - management in Agile projects? Of course, if you read this blog regularly you already know that for Enterprise agile to be successful you need some good project management!
4. Experience is critical for success. If you go back and look at the video you might notice that before the last sprint the teams get a nudge from an experienced player. In this game's case the nudge is about 'maximizing' resources - and this meant passing two balls at once. This would never have been considered in the earlier sprints but with a little experience to direct the teams they easily adopted the concept and really improved their performance. A bit of a warning: a good agile coach will let their team build on their skills and not introduce a 'nudge' too early. I suspect that if we had given the team the "two hands" advice after the first sprint it might have proven less beneficial as the teams had not progressed enough in their working approach to successfully implement this advanced concept. Food for thought.
So, both groups improved their efficiency many times over from the first sprint. In addition, if you had asked them whether they thought they could double their throughput after each sprint, they would have been skeptical at best.
And the final lesson: Keep an open mind, learn from experience, and be willing to fail - just do it fast and adjust!
Let me know if you have played the game and what your take-aways were.
You can follow me on Twitter here.
The question I want to pose is what are good criteria for assessing a successful Agile project? This question builds on Mike's recent post about criteria for measuring an Agile project manager's success - and we got lots of great responses and ideas in the comments.
The data points being used by the OutSystems team to evaluate whether projects are eligible for an award are:
1. Size & scope of project: the project should be of medium to large scale (over 40,000 software units in size).
2. Project definition & objectives: the project should deliver significant and measurable business value.
3. Project approach: the project should have been run based on Agile practices, following an iterative development approach with regular end user involvement.
4. Project metrics: a baseline of project metrics must be submitted in order to measure the impact of using Agile to deliver the application.
The team then uses these data points to assess whether an Agile approach was employed to deliver the project on time, on budget, with 100% user adoption, and delivered true value to the business and IT.
Is this list a reasonable set of data points for measuring the success of an agile project?
BTW - Here are some examples of the results from the initial set of award winners (read more details here):
XDx - Analysis Request Management System
Time-to-market: 6 weeks + 1 for launch; Number of Agile sprints: 3
Customer quote: "This was our first Agile project and it achieved the two key business goals: avoiding tracking errors and improving real-time data consistency for our studies. Most importantly, we were able to deliver this value to the business in a record time, exceeding both developer's and user's expectations and establishing Agile as the preferred methodology for this type of development project." - Jochen Scheel, Director at XDx.
RWE - Tiger, Implemented by Atos Origin (BTW - nice blog from Atos Origin on the project & award here)
Time-to-market: 14 weeks; Number of Agile sprints: 5 + 1 tuning
Customer quote: "The Gas Portfolio Management application was implemented over a period of three months which was only possible with the OutSystems' Agile Platform and methodology. Their way of sharing information, processing activities and reviewing project deliverables with key users of RWE NL was instrumental to the success of this project."- Perry van de Goorberg, Project Manager at RWE.
OK! teleseguros - Sales Platform and Home Insurance, Implemented by Keep It Simple
Time-to-market: 13 weeks; Number of Agile sprints: 3
Customer quote: "This project was a true success as it exceeded the business's expectations in terms of objectives achieved and above all business benefits. The Agile Platform and methodology allowed the business to engage the development team and see the immediate impact and results of all project changes and decisions." - Sérgio Carvalho, Marketing and Product Director at OK! teleseguros.
So, off I went and did some research on PMBOK. What I learned is that the OutSystems Agile Methodology, while based on SCRUM, incorporates lots of extra management concepts that align with PMBOK. (BTW - If you are not familiar with the OutSystems Agile approach here's a white paper.) I will leave the convergence of Agile and PMBOK for a later discussion, but in my opinion, there are lots of PMBOK practices that are applicable to running Agile projects in Enterprise IT shops.
What I am interested in is the following - What is a good measure of an Agile project manager's success?
In the conversations on Agile and PMBOK I've had over the last couple of weeks, I have come to the conclusion that the best measure of an Agile project manager's effectiveness is not on-time delivery, staying within budget, etc., but rather a new application's adoption by the business! Everyone I have talked to on the topic agreed with this notion of adoption, but none have really offered a concrete technique for measuring it. In most of the discussions the notion of return on investment came up as a solution. However, in drilling into ROI we always came to the conclusion that while ROI is important, it is not necessarily a good measure of your project manager's effectiveness.
So my quest for a good measure of an Agile project manager's success continues. Everyone agrees that application adoption could be it, but I have yet to find anyone with a good definition of how to measure it. Your thoughts?
You should follow us on Twitter here.
At OutSystems, our roadmap is strictly a guideline because companies have different methods, cultures, and management approaches. When introducing Agile, here are the steps we generally go through. Concepts and activities that are emphasized will vary based on understanding of the customer culture, organizational structure (formal and informal), as well as their prior knowledge of Agile methodologies.
1. Education - first, level-set those involved in the rollout. We found that there are many different levels of understanding of what Agile is, even if people say they "get it". Part of this process is to have them actually go through the process before the project begins. At OutSystems, we have "Agile-in-a-Day" training sessions that provide participants with a hands-on introduction to help develop real understanding of key concepts and activities. If you have a project already selected, involve the business sponsor, business manager, and key business users.
2. Project Definition / Selection - Once everyone is clear on the vision and direction, a project can get started. The project may have been identified earlier, but how the project was chosen and what criteria were used need to be understood. Since this is the first Agile project, we need to make sure that risks at this level are addressed to help ensure success for both the project and the process. The criteria for selecting the project need to include the solution's level of complexity, visibility, resources, and integrations.
3. Execute Project - Now we start the project by educating the target users if they were not involved in the initial Agile-in-a-day. Go through the project kickoff; explain the methodology, roles, responsibilities, timeline, deliverables, etc.; fairly standard project stuff and along the way - expunge the word "scope" from all project documents, thoughts, and discussions. Work on the backlog, feature negotiations, the sprints, scrum meetings, demos, etc. Once a sprint is done, do a retrospective and refine the processes for the next sprint and the next project. Once the solution is in production, conduct a tuning sprint. This is a special sprint we do at OutSystems to ensure 100% adoption by implementing features that will boost adoption and conducting both solution performance & platform tuning in the production environment.
4. Perform Project Retrospective - apply lessons learned to subsequent projects and refine other processes. Note that this process improvement will involve Support organizations and dynamics between project teams. One of the things we often encounter is that Support organizations cannot move as fast as the project sprints and tend to delay Agile projects. Similarly, non-Agile projects have a difficult time addressing the integrations with Agile projects. As you execute your first project, you will find that you may need to bend or even break some rules to keep your project on track as defined by your timebox. Therefore at the end of the project, you will need to work with Support and other internal organizations to establish new protocols or processes specific to Agile projects.
5. Iterate Steps 1-4 - In our experience, we found it necessary to conduct multiple Agile-in-a-Day sessions to get everyone in the company level-set on the organization's approach to Agile. Agile is a mindset change; expect hurdles and naysayers. Besides, change is always difficult even if the participants are willing, able, and have executive sponsorship.
(Unofficial #6 - Be realistic and good luck)
What do you think of this roadmap? How have you introduced Agile into your companies?
With architecture being a core component of the OutSystems approach to Agile application development and the role of Delivery Manager, we thought it would be interesting to share this presentation and ask the OutSystems Community how you have embraced architecture while practicing Agile in an Enterprise setting.
For example, many of you are successfully delivering a high degree of reuse while focusing on delivering individual applications. While OutSystems' Agile Platform provides a solid foundation for refactoring code and adjusting architecture in progress, we suspect that there are many instances where your Corporate Architecture team needed to play a role in moving the application delivery forward.
So, if you are a Delivery Manager or Architect it is time to share your experiences of working together in an Agile application development project. How much education was required? Did you feel resistance? What were the lessons learned?
Personally, I couldn't agree more and I think he nails this key point about SCRUM. I suspect that most of you will be aligned with this too.
As an application development practitioner from the "old days" for me, iterative development and continuous business user involvement is the ultimate Agile truth - it's the best way to keep projects on schedule and deliver a high value solution.
But now for the real question about customer involvement: What happens when you don't have the right customers involved?
Last week I was discussing an Agile project with one of our OutSystems Engagement Managers and he described this exact issue. The team was very excited to tackle this project using Agile methodology as it was going to be the first for this particular client's business area. The business was on board, trained on Agile and they had the full support of the management who were committed to participating in planning user stories, each sprint demo, backlog settlement, etc.
The problem was that the supervising managers were not the ones who would use the application on a daily basis, and they were somewhat out of touch with the processes and issues on the ground. What happened? The management team met with resistance from the guys in the trenches about the initial app - and then what? ...they changed their minds about the functionality. Ultimately this proved to be a positive experience, as the Agile and SCRUM-based approach they used caught the mismatch early and the resulting app was accepted. So, the lesson here: beware - management participation is critical, but you also need the real end users to make sure you don't get too far off track!
Have you come across this type of issue on your Agile projects? How is Agile helping your projects deliver better software?
In particular, on the question of moving Corporate IT to Agile and how that means changing the traditional way that IT projects get approved and funded. The group felt that breaking the traditional cycle of detailed requirements documents, mandatory project deliverables and change requests would impact the entire project approval/funding and management processes - and yes, successful transition to an Agile model would require corporate IT to educate their business owners and stakeholders on the Agile approach. However, the workshop participants all agreed that making this change would definitely be a challenge.
One of the strategies we discussed was shifting the focus of the procurement process from detailed requirements to defining the amount of functionality to be delivered. We shared the OutSystems' approach of using user stories and patterns to set a high level scope of functionality to be delivered and made the case that function points could be used as the measure of functionality - since this would allow the delivery team the necessary freedom to adjust the requirements during the project while still meeting a target deliverable.
This seems to be a recurring theme that I'm hearing from IT teams who want to embrace Agile. If you have faced this challenge and succeeded (or failed) - what do you think of the above approach and how does your IT team get its Agile projects approved and funded?
Evergreen applications are business applications that continue to evolve as the business changes, and so provide constant value year after year. At OutSystems, virtually any solution deployed using Agile methods is required to stay evergreen and is subject to a continuous Agile Evolutionary Maintenance process. Part 1 of this article explored the main phases of the Agile Evolutionary Maintenance process and defined the three key project types that are most common.
In Part 2, I will shift my attention to defining the different types of backlog items that typically make up an evolutionary maintenance sprint. In addition, I will define some key technology challenges you must overcome in order to deliver on the promise of evergreen applications. Finally, I will describe how OutSystems can help in this process.
Backlog Items of Evolutionary Maintenance
The typical backlog items that are part of an evolutionary maintenance release include:
- Defects - fixes to application defects and business processes that are not correct for real life conditions, and therefore impact daily business operations;
- Application inefficiencies - over time, applications composed of multiple services will experience changes in their performance profiles due to changing business use and shifting service load. It is key to identify and overcome performance inefficiencies such as:
- Slow Web pages;
- Slow database access;
- Integration bottlenecks in distributed services architecture.
- Usability improvements - overcome annoyances that impact user efficiency or adoption;
- Change Management - incrementally deliver valuable new features, either selected from ones left out of previous releases or newly identified during use of the live application.
Challenges of Implementing Agile Evolutionary Maintenance
To achieve an effective Evolutionary Maintenance process it is imperative that the maintenance team has the right tools. They need access to development and project management tools which:
- Allow them to assess application performance;
- Assist with capture and prioritization of change requests;
- Streamline the change, test and deployment process.
Without the proper application delivery and management environment your maintenance teams will not be able to deliver the agility needed to keep your applications evergreen. Let's explore these key challenges in a little more detail:
- Automated instrumentation of the application that doesn't require expensive operational support, extra coding, etc. This is important in order to understand where to invest in code optimization when dealing with application performance issues. Areas on which to focus attention:
- Identification of the application web pages visited, their rendering time and isolation of slow screens that should be reengineered;
- Identification of slow queries that may require optimization;
- Performance assessment of the web services and APIs being used by the application to provide access to external systems;
- Review of application error logs;
- Centralized analysis of all application feedback including performance logs, audit logs, etc.
- Enhanced capabilities for end-user collaboration to gather feedback on application usage. This capability is critical in order to address incorrect application behavior and improve usability and adoption. Areas on which to focus attention:
- Provide the end-user with the means to give feedback on the application behavior. This process should be intuitive for users and result in unambiguous feedback for the maintenance team;
- Streamline feedback management to effectively classify, prioritize and transform the user feedback into manageable backlog items.
- The availability of enabling technology which supports rapid change and boosts delivery efficiency. Areas on which to focus attention:
- Provide adequate application documentation when planning for evergreen applications. This is because the person performing maintenance tasks is typically not the same as the one who developed the original application; therefore it is critically important that the application's structure and logic is easy to understand;
- The development tools must make it easy to respond to change by supporting the change process. In our experience this includes tools that automate impact analysis and isolate changes, which in turn minimizes testing and improves overall development team productivity;
- The application configuration and build environment must be automated to remove errors and support a rapid deployment of changes to test/production environments.
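The Agile Platform collects this kind of instrumentation automatically. For teams doing it by hand, the underlying idea can be illustrated with a minimal Python sketch of timing instrumentation that logs every call and flags slow ones; the function names and the 500 ms threshold are assumptions for illustration, not part of any OutSystems API:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("perf")

SLOW_THRESHOLD_MS = 500  # hypothetical cutoff for flagging a slow operation

def instrument(name):
    """Decorator that records execution time and flags slow calls."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                level = logging.WARNING if elapsed_ms > SLOW_THRESHOLD_MS else logging.INFO
                log.log(level, "%s took %.1f ms", name, elapsed_ms)
        return inner
    return wrap

@instrument("render_customer_page")
def render_customer_page():
    time.sleep(0.01)  # stand-in for real page-rendering work
    return "<html>...</html>"
```

Centralizing these timing logs is what lets a maintenance team spot which pages and queries deserve reengineering effort.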
How OutSystems Helps with Going Evergreen
OutSystems' Agile Platform and project management tools provide the technology needed to shorten delivery cycles and increase software development agility, project predictability, responsiveness to business change and overall development productivity. For Evolutionary Maintenance teams, OutSystems technology addresses the key challenges faced when going evergreen:
- The Agile Platform automates the logic required to collect performance data and audit information. In addition, it supports the automatic logging required to address performance tuning, including monitoring third-party integrations done via the platform's Integration Studio toolkit;
- The Agile Platform's Service Center tool provides the necessary functionality to analyze performance data and audit logs collected from the application during run-time;
- The Embedded Change Technology (ECT) and the Agile Network's Issue Management tool ensure end-user collaboration:
- ECT provides an easy way to collect application feedback from testers, end users, etc. It delivers meaningful information based on feedback given directly on the application screen, and all feedback is automatically registered and stored in the project's Issue Management tool;
- The Issue Management tool handles end-to-end issue management, from capturing user feedback, to issue classification and prioritization. Issues can then be included in the maintenance release backlog for delivery.
- The Agile Platform's TrueChange™ engine supports automated impact analysis and self-healing. This streamlines the change process and assures that quality applications are delivered. Self-healing is unique to the Agile Platform and is made possible by the fact that development is done in a model-based environment. This allows the platform to understand and manage all changes across all the development artifacts. For example, a simple change request to "add a new field to a Web page form that is only visible to a Sales Manager" might require changes to the User Interface, Data Model, external Service APIs, Access Control rules and Business Rules. With its full reference checking and self-healing capabilities, TrueChange™ safely rebuilds the sections of the application that can be automatically inferred, and provides the Business Developer with impact analysis on any manual changes he or she may still need to make to fully address the requirement;
- The Agile Platform's 1-Click Publishing streamlines the configuration management, build and deployment process - effectively reducing common build errors and the typical build and deploy time to minutes.
An evolutionary approach to maintenance is key to keeping your applications evergreen and delivering the best business ROI. As you expand your teams' Agile processes, you can apply the Evolutionary Maintenance phases defined in Part 1 of this article across a variety of different project types and architectures. To be successful you will need to overcome the technology challenges described in Part 2.
Be sure to review how OutSystems can help with your web business applications!
Evergreen applications are business applications that continue to evolve as the business changes and therefore provide constant value year after year. At OutSystems, virtually any solution that is deployed using Agile methodologies is required to stay evergreen and subject to a continuous Agile Evolutionary Maintenance process. Part 1 of this article explores the main phases of Agile Evolutionary Maintenance while Part 2 identifies some key technology challenges you must overcome to deliver on the promise of Evergreen Applications.
Setting the Stage
Once your application goes live, it immediately enters the Agile Evolutionary Maintenance phase and will stay in this phase in order to keep up with changing business needs and to address application inefficiencies that are only identified with "real life" demand and conditions. To successfully deliver on this concept, the Agile Maintenance team schedules regular sprint releases focused on delivering the "next most valuable features" for the business.
Through this sequence of regular Evolutionary Maintenance releases, the application will keep up with business needs while minimizing the impact of change on the business. The underlying principle is the same as for general Agile development - fast deployment of the most valuable features, delivered on time and aligned with the business.
The Agile Evolutionary Maintenance Cycle
The Agile Evolutionary Maintenance process is a continuous cycle of application releases. Each cycle includes the following steps:
- Gather defects, user feedback and new business needs for the live application. This step is actually a continuous process that normally occurs in parallel with the application's release process;
- Classify and prioritize all the feedback collected;
- Convert the classified feedback into project backlog items, agreeing on the next release content. The focus should be on the most critical defects and highest-value change requests;
- Develop and deploy the Backlog agreed for the next release.
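The classify-and-convert steps above can be sketched as a simple triage routine. This is an illustrative Python model only; the categories, priority values and field names are assumptions for the sketch, not the Agile Platform's actual data model:

```python
from dataclasses import dataclass

# Hypothetical priority scheme: lower number = more urgent.
PRIORITY = {"defect": 1, "usability": 2, "change_request": 3}

@dataclass
class Feedback:
    screen: str        # which application screen the user reported from
    category: str      # defect / usability / change_request
    description: str

@dataclass
class BacklogItem:
    title: str
    priority: int

def triage(feedback_list):
    """Classify feedback and sort it into a release backlog, most urgent first."""
    items = [
        BacklogItem(title=f"[{fb.screen}] {fb.description}",
                    priority=PRIORITY.get(fb.category, 99))
        for fb in feedback_list
    ]
    return sorted(items, key=lambda item: item.priority)
```

With feedback triaged this way, agreeing on the next release content becomes a matter of taking items from the top of the sorted backlog.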
Evolutionary Maintenance releases depend on the volume and type of change required. In my experience these releases fall into one of the following three categories:
- Hot Fixes - a quick intervention to fix a critical defect that is dramatically affecting business operations, with the fix deployed immediately. These interventions must be completed in less than 24 hours, and in some cases within the next couple of business hours.
- Sprint Releases - made up of a single sprint lasting one or two weeks. In Agile Evolutionary Maintenance it is common to schedule regular short releases to address accumulated user feedback and correct non-critical defects.
- Evolutionary Projects - when there is a more substantial business change, a specific evolutionary maintenance cycle may be extended into a small or medium project with several two-week sprints. This is a common approach for deploying a very large application: incrementally release extra functionality that did not make it into the initial Agile project. Experience has proven time and again that waiting for the perfect, complete solution leads to a lower-than-expected return on investment, due to constant changes in business scope and delayed benefit from new application features.
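As a rough illustration, the choice among these three release categories can be reduced to a rule of thumb. The thresholds below are assumptions drawn from the descriptions above, not a prescribed formula:

```python
def classify_release(critical_defect: bool, estimated_weeks: float) -> str:
    """Pick the maintenance release type for a batch of work.

    Hypothetical rule of thumb: a critical defect goes out as a hot fix;
    work that fits within one or two weeks becomes a sprint release;
    anything larger is planned as an evolutionary project of several
    two-week sprints.
    """
    if critical_defect:
        return "Hot Fix"               # deploy in < 24 hours
    if estimated_weeks <= 2:
        return "Sprint Release"
    return "Evolutionary Project"
```

In practice the decision also weighs business impact and team capacity, but the core trade-off is urgency versus volume of change.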
The Agile Evolutionary Maintenance cycle is critical to keeping your applications evergreen. Following this approach will dramatically improve overall return on investment over the life of your application.
In Part 2 of this article I will identify some key technology challenges and how you can overcome them to deliver on the promise of evergreen applications.