For the last couple of months, a few of our core in-house applications hadn't been behaving as well as they should... Some were running a bit slow, and we were experiencing connectivity problems related to our ISP.
This was serious because these are core systems running on top of the OutSystems platform, and they include our website, our community, our partner network, our product management tool, and a host of other internal applications. Since responsiveness and availability are two fundamental traits of a great app, we had to do something about it. We decided to move our infrastructure to the cloud, specifically to Amazon EC2. And because these were core apps, we had to do it with minimal downtime.
The Old Infrastructure
The infrastructure we wanted to move to Amazon was made up of two separate OutSystems production environments - we decided not to move the development and QA environments. These environments and their applications were built using a service-oriented architecture to communicate with each other, and to integrate with third-party apps like Pardot and Salesforce.
Another part of the challenge was that these systems were used by internal OutSystems applications deployed in different infrastructures - including business-critical integrations with the R&D process. This raised security concerns, as well as the challenge of keeping these integrations running with minimal changes to the architecture.
The New Setup
Since the machines about to be replaced were near retirement, and because we could add machines as needed on Amazon, we decreased the number of front-end servers from 5 to 3. We had an additional machine running SQL Server and, at the time of writing, all machines were m1.large instances (2 virtual cores, 4 Amazon ECUs, 7.5 GB RAM). To distribute load between the Amazon front-ends, we used Amazon's Elastic Load Balancing.
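Elastic Load Balancing's internal algorithm is Amazon's own, but conceptually it spreads incoming requests across the front-ends much like a round-robin rotation. A minimal sketch of that idea (the host names are placeholders, not our real servers):

```python
from itertools import cycle

# Hypothetical front-end hosts behind the load balancer (names are made up).
FRONT_ENDS = ["frontend-1", "frontend-2", "frontend-3"]

def make_dispatcher(hosts):
    """Return a function assigning each incoming request to the next host in rotation."""
    rotation = cycle(hosts)
    return lambda request_id: (request_id, next(rotation))

dispatch = make_dispatcher(FRONT_ENDS)
assignments = [dispatch(i) for i in range(6)]
# Six requests land evenly: two on each of the three front-ends.
```

With one front-end gone, the rotation simply has fewer hosts - which is why adding or removing machines on Amazon is straightforward.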
Preparing for the Big Move
To perform the move, we used an approach similar to the hot-standby approach we suggest to our customers. The idea is to set up an infrastructure on the cloud, and then use the disaster recovery option to hand control over to this new infrastructure. For this, we had to:
1. Limit Users to External Access
We made sure all users were accessing the in-house systems from the outside network, just to confirm that everything was in fact available externally. This exercise identified certain content that needed to be accessed via HTTPS, for security reasons.
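One way to spot content that still needs to move to HTTPS is to scan pages for hard-coded insecure links - this is an illustrative helper, not the tool we actually used:

```python
import re

def find_insecure_links(html):
    """Return absolute http:// URLs embedded in a page -- content that would
    break or trigger mixed-content warnings once served over HTTPS."""
    return re.findall(r'http://[^\s"\'<>]+', html)

# Example page (the URLs are made up):
page = ('<img src="http://static.example.com/logo.png">'
        '<a href="https://example.com/docs">docs</a>')
insecure = find_insecure_links(page)
# Only the http:// image URL is flagged; the https:// link is fine.
```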
2. Establish a SQL Mirror
We had to set up a database mirror for the SQL Server. This was a big step, considering we moved 120 GB to Amazon -- a transfer that took two days to complete. And since our internal infrastructure kept working, by the time it was done we had an extra gigabyte to move. Also, because these were two remote installations of SQL Server, we had to set up a secure mirror with certificate-based authentication -- which had its own caveats.
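The numbers put that transfer in perspective: moving 120 GB in roughly two days works out to well under 1 MB/s of effective throughput, which is also why the live database had accumulated extra data to sync by the time the copy finished.

```python
# Back-of-the-envelope throughput for the initial 120 GB mirror copy.
gigabytes = 120
seconds = 2 * 24 * 3600              # the two days the transfer took
mb_per_second = gigabytes * 1024 / seconds
# roughly 0.7 MB/s of effective throughput
```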
3. Install the Front-End
We then installed the front-end servers in the cloud, and connected them to the in-house database. This was easy, since OutSystems already provided Amazon images. The trickier bit was to ensure a secure connection with our internal systems, something we achieved using OpenVPN.
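For reference, an OpenVPN client configuration for this kind of tunnel looks roughly like the fragment below; the endpoint, port, and certificate paths are placeholders, not our actual settings.

```
# Minimal OpenVPN client config (illustrative; all values are placeholders)
client
dev tun
proto udp
remote vpn.example.com 1194    # in-house VPN endpoint
ca ca.crt                      # CA certificate
cert client.crt                # client certificate
key client.key                 # client private key
persist-key
persist-tun
```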
4. Run Automated Tests
Finally, we tested to make sure everything worked. We used some automated test scripts, and took the chance to identify dependencies on internal systems. To fix these dependency issues, we had to move some components back to the internal data center, and connect everything using web services over HTTPS.
Fortunately, this was trivial to do with the OutSystems platform, as it simplifies the development of web services and service-oriented architectures. The only thing we couldn't test was performance... after all, the database was still in our data center, 3,000 miles from our front-ends.
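At their simplest, checks like ours amount to a smoke test: hit each application endpoint over HTTPS and confirm it answers. A simplified sketch, with the HTTP call injected so it can be exercised without a network (the URLs and the fetch function are illustrative, not our actual scripts):

```python
def smoke_test(urls, fetch):
    """Call fetch(url) for each endpoint and record whether it answered 200 OK.
    fetch is injected so the check can run against a stub, without a live network."""
    results = {}
    for url in urls:
        try:
            results[url] = fetch(url) == 200
        except Exception:
            results[url] = False
    return results

# Stub standing in for a real HTTPS request (URLs are made up):
fake_fetch = lambda url: 200 if "community" in url else 500
report = smoke_test(
    ["https://www.example.com/community", "https://www.example.com/partners"],
    fake_fetch,
)
# report marks /community healthy and /partners failing
```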
Pulling the Switch
With everything prepared, it was time to go full cloud. Here's what we did:
- We turned off the in-house systems. No problem here; we just had to put up an unavailable page and stop the OutSystems platform and IIS.
- Next, we brought the cloud database server up. We were a bit worried about the time it would take to sync our SQL Server mirror after the failover, but it only took about a minute. In less than 5 minutes, everything was configured and ready.
- We reconfigured the OutSystems platform to use the cloud database. No problems there, we just had to launch the configuration tool, modify the database settings, and restart the platform.
- Finally, we had to change the DNS servers to point to the new infrastructure location.
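The four steps above are strictly ordered: traffic must stop before the mirror fails over, and the platform must point at the new database before DNS sends users to it. That ordering can be sketched with each step stubbed out (the helper names are made up for illustration):

```python
def cutover(log):
    """Run the four cutover steps in order; each inner helper is a stand-in
    for the real action (hypothetical names, for illustration only)."""
    def stop_in_house():    log.append("stop in-house platform and IIS")
    def failover_mirror():  log.append("fail over SQL mirror to the cloud")
    def repoint_platform(): log.append("reconfigure platform to the cloud database")
    def switch_dns():       log.append("point DNS at the new infrastructure")

    for step in (stop_in_house, failover_mirror, repoint_platform, switch_dns):
        step()
    return log

steps = cutover([])
# steps records the four actions in the order they must run
```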
In the end, we achieved our goal: our core applications ran faster, and in time we'll be able to measure how much connectivity has improved since moving to the cloud.
It's amazing how fast we were able to make the move. The procedure took 4 hours, but it could easily have taken 30 minutes had it not been for DNS. DNS ended up being the big bottleneck of the whole process, due to the time it takes for changes to propagate. This is something we'll need to do differently if we repeat this operation in the future.
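The propagation delay is bounded by the DNS record's time-to-live: resolvers may keep serving the old address until their cached copy expires. Lowering the TTL well before a cutover shrinks that worst-case window, as the arithmetic shows (the TTL values here are common examples, not our zone's actual settings):

```python
# Worst-case DNS staleness equals the TTL resolvers cached before the change
# (example values, not our actual zone configuration).
default_ttl = 86400       # 1 day, a common default
lowered_ttl = 300         # 5 minutes, lowered ahead of the cutover
worst_case_hours = default_ttl / 3600
worst_case_minutes = lowered_ttl / 60
# up to 24 hours of stale answers vs. 5 minutes after lowering the TTL
```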
In the end, the work was made a lot simpler thanks to the OutSystems platform. Not only is it prepared for these server redundancy scenarios, but it also makes it easy to change configurations and have everything running with new settings in no time.
More importantly, the platform made it easy to split off the components we wanted to keep in-house, and to glue everything back together with a service-oriented architecture. Moving quickly and securely opened up a world of possibilities.
Are you planning to migrate your existing applications to the cloud?