How do you add additional front-end servers to the agile platform?

Hi Robert,

Did you already check the install checklist? It's usually available in the OutSystems folder, under Start >> All Programs >> OutSystems >> Online Help and Resources >> Installation Checklist.

Cheers, 
RNA
This post might be ages old, but is it possible to get some new answers along the same lines?

Platform v9.0.0.11

I've installed the 2nd node and configured NLB on it. I have not added the 1st node to the NLB cluster yet. The 2nd node was installed by importing the configuration as the checklist says, and I disabled the Deployment Controller Service as the checklist says; I said yes to the Service Center install...

So, questions...

Logging into Node 2 via Service Center, there are plenty of errors regarding the Deployment Controller Service not running, and there is a license error... Logging into Node 1, I can see Node 2 as a registered Front-End server.

In Windows NLB you only have the option of directing certain ports to different machines or load-balancing them; there is no way to force /servicecenter to Node 1, so connecting via NLB might land on Node 2 and hit the errors mentioned above... If Service Center is not supposed to be installed on Node 2, how is OutSystems configured to always point Service Center to Node 1? Any ideas?

On Node 1, the error lists are pointing towards Deployment Controller Service errors on Node 2... When the install checklist says "Disable the OutSystems Deployment Controller Service", is this meant to be a permanent disable, or a temporary one while installing?

Anyone want to comment on the procedure for configuring LifeTime for both nodes? How is deployment done in stages if need be? How are upgrades to the platform done? (I need to upgrade to v9.0.0.19 soon to help with LifeTime errors anyway.)

Thank you in advance!
1) Does every server need to be running the same version of the platform? 2) What happens when you end up with 10+ or 10,000+ servers? Is there an easy/automated way to configure the servers from a central control point or portal... without having to log in to each individual server to manually configure them?
Riaan -

Make sure that the configuration for Node 2 has the IP address of Node 1 as the "Controller". If it says "localhost" or "127.0.0.1" or its own IP address, that will explain a lot of the issues that you are seeing.
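
For anyone wanting to double-check this, here is a minimal sketch (Python) that scans the platform configuration file on the front-end for controller-related entries pointing at a loopback address. The file path and the idea that the relevant settings have "controller" or "compiler" in their element names are assumptions on my part; the authoritative place to verify the value is still the Configuration Tool.

    # Minimal sketch: flag controller-related config entries that point at a
    # loopback address. The path below is an assumed default install folder.
    import xml.etree.ElementTree as ET

    CONFIG_PATH = r"C:\Program Files\OutSystems\Platform Server\server.hsconf"  # assumption
    LOOPBACKS = {"localhost", "127.0.0.1", "::1"}

    tree = ET.parse(CONFIG_PATH)
    for elem in tree.iter():
        tag = elem.tag.lower()
        value = (elem.text or "").strip()
        # Settings whose names hint at the controller/compiler host are worth checking.
        if ("controller" in tag or "compiler" in tag) and value:
            status = "SUSPECT (loopback)" if value.lower() in LOOPBACKS else "ok"
            print(f"{elem.tag}: {value} -> {status}")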

J.Ja
Hi Riaan,

In short, when you install a second front-end (following the mentioned checklist), it will register itself (contacting the deployment controller server - DCS - according to the configuration tool). It will then retrieve and deploy locally all available applications - including Service Center (which is why you should be able to access SC on *any* front-end).

There's only one DCS per environment. That is, only one server has this service enabled (answering your question, this is not a temporary setting).

There are several things that can go wrong though - operating system settings, firewall, platform settings... For instance, I'd advise you to set up the platform first and then worry about the NLB, so you can focus on one problem at a time. Some hints:

  • What do you see in SC's Environment Health (on both nodes)? Are both nodes/services up and running? Are there services reporting errors?
  • Exactly which errors are you seeing in SC?
  • Also important, did you check for errors in event viewer?
  • After exporting + importing the configuration tool settings, did you change anything? The connection string has to be the same in all nodes, so if some front-ends use an IP for the database while others use the hostname, this scenario won't work.
  • The ports in the configuration tool (12000-12004) need to be open between front-ends - check your firewall settings (use telnet to test if needed; there's also a small sketch below this list).
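
If telnet isn't available, something like this minimal sketch (Python) does the same connectivity check. The hostnames and the 12000-12004 range are assumptions taken from the bullet above; adjust them to your environment.

    # Minimal sketch: check that the Configuration Tool ports are reachable
    # from this front-end. Hostnames and port range are assumptions.
    import socket

    NODES = ["node1", "node2"]   # adjust to your front-end hostnames
    PORTS = range(12000, 12005)  # 12000-12004, confirm in the Configuration Tool

    for node in NODES:
        for port in PORTS:
            try:
                with socket.create_connection((node, port), timeout=3):
                    print(f"{node}:{port} open")
            except OSError as exc:
                print(f"{node}:{port} NOT reachable ({exc})")
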
Regarding the upgrade, the installation checklist covers that as well (be sure to select the correct scenario at the top).

@Robert

1) In each environment, you should upgrade all front-ends at roughly the same time, yes. Ok, the platform is probably a bit more forgiving in most scenarios if you don't, but why would you want to have different platform versions anyway?

2) OutSystems cloud servers are managed in an automated way. This was achieved by integrating services such as the ones made available by the infrastructure provider (for starting / stopping servers, for instance).
Robert Chanphakeo wrote:
1) Does every server need to be running the same version of the platform? 2) What happens when you end up with 10+ or 10,000+ servers? Is there an easy/automated way to configure the servers from a central control point or portal... without having to log in to each individual server to manually configure them?
Robert, based on every single environment I have ever worked on, having the same version running everywhere saves you a load of midnight hours. As for multiple servers, we are currently running 9 servers (different solutions, some dev/test environments; this was the first to be configured in a farm), with 5 more to come online, both for new solutions and for redundancy of others. I don't think the setup was too bad, taking into account that we are building out one solution at a time; I can see issues when you reach 10+ servers for a single solution if there is no automation - read below...
 
Justin James wrote:
Riaan -

Make sure that the configuration for Node 2 has the IP address of Node 1 as the "Controller". If it says "localhost" or "127.0.0.1" or its own IP address, that will explain a lot of the issues that you are seeing.

J.Ja
Justin, thank you! The installer checklist doesn't say anything about the deployment controller IP, and because this was my first farm install I followed the checklist down to the letter - shame on me... The environment is now stable!

Paulo Ramos wrote:
Hi Riaan,

In short, when you install a second front-end (following the mentioned checklist), it will register itself (contacting the deployment controller server - DCS - according to the configuration tool). It will then retrieve and deploy locally all available applications - including Service Center (which is why you should be able to access SC on *any* front-end).

There's only one DCS per environment. That is, only one server has this service enabled (answering your question, this is not a temporary setting).

There are several things that can go wrong though - operating system settings, firewall, platform settings... For instance, I'd advise you to set up the platform first and then worry about the NLB, so you can focus on one problem at a time. Some hints:

  • What do you see in SC's Environment Health (on both nodes)? Are both nodes/services up and running? Are there services reporting errors?
  • Exactly which errors are you seeing in SC?
  • Also important, did you check for errors in event viewer?
  • After exporting + importing the configuration tool settings, did you change anything? The connection string has to be the same in all nodes, so if some front-ends use an IP for the database while others use the hostname, this scenario won't work.
  • The ports in the configuration tool (12000-12004) need to be open between front-ends - check your firewall settings (use telnet to test if needed).
Regarding the upgrade, the installation checklist covers that as well (be sure to select the correct scenario at the top).

@Robert

1) In each environment, you should upgrade all front-ends at roughly the same time, yes. Ok, the platform is probably a bit more forgiving in most scenarios if you don't, but why would you want to have different platform versions anyway?

2) OutSystems cloud servers are managed in an automated way. This was achieved by integrating services such as the ones made available by the infrastructure provider (for starting / stopping servers, for instance).
 Hey Paulo,

The issue came down to the checklist not pointing out that the deployment controller IP should have been changed to the 1st node...  The moment I fixed that and restarted the services, Node 2 came up without any issues.

Re the ports, the checklist also doesn't say anything about that. Is that TCP or UDP? Inbound or outbound? I reckon TCP will do, as it's currently working in my environment.

As for upgrading "all front-ends at roughly the same time"...  Fair enough that OS is "a bit more forgiving", but that doesn't solve the issue of scalability.  

The thing I'm trying to understand is two-fold:

Should I be logging into the 1st server, upgrading the platform, installing the new solution pack, (installing the new LifeTime), logging out, logging into the next server and basically rinsing and repeating the platform upgrade? Are there perhaps automation switches, so I could push a script after the first server has been done?

How is the new farm displayed/configured in LifeTime? I see there is a 2-stage deployment where the upgrade of the DB and the server is split; could that be used in a big environment to push newer versions of the platform?

Due to limitations on the OS Cloud (an Amazon issue - not OS's fault) we cannot run our infrastructure in any cloud but our own purpose-built cloud... I would appreciate it if OS could pass more information on to the guys running similar environments...
 
Hi Riaan,

Glad you solved it!

I'm not aware of any available automation switches besides creating a snapshot of the server (if you're using a virtual environment). This can save you some time after configuring the 1st server, but it will still involve manual steps afterwards... Of course, in the end these manual steps could also be automated, but it's a complex task. Fully automating the installation / upgrade of the platform would be great, but we're not there yet (except in the OS cloud), as it's hard to cope with a diverse set of configurations, and more and more people are moving to the cloud (disclaimer: this is my personal interpretation).

As far as I know, it's mostly irrelevant to LifeTime whether each environment is running a farm or a single server. LT manages the deployment of your apps across environments in the same way. It's actually the responsibility of the DCS (deployment controller server) to orchestrate (per environment) the deployment to all front-ends.

Hi Riaan,

The server upgrade procedures are explained in the balanced upgrade technote document: http://www.outsystems.com/Community/download.aspx?DocumentId=787

As a quick summary: you take the controller server out of the load balancer (together with at least one of the front-ends, in the scenarios where the controller is not on a front-end node) and turn off the deployment service on the other front-ends. Then you fully upgrade that server and republish the factory there.
Then you reverse the load balancing and upgrade the other servers (this can be progressive; it doesn't need to be all at once if you have more than 2 front-ends).
Once the deployment service is turned on on each front-end, the upgraded applications will automatically be pushed from the controller node, so there is no need to repeat the publish process on any more front-ends.
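
To keep track of which node is in which state during this procedure, a quick local check like the minimal sketch below (Python) can help. It only assumes that the relevant service display names contain "OutSystems"; "net start" itself simply lists the services currently started on the machine.

    # Minimal sketch: list the OutSystems services currently started on this
    # node, to confirm the deployment service is off where it should be.
    import subprocess

    output = subprocess.run(["net", "start"], capture_output=True, text=True).stdout
    running = [line.strip() for line in output.splitlines() if "outsystems" in line.lower()]

    if running:
        print("OutSystems services currently running:")
        for name in running:
            print("  - " + name)
    else:
        print("No OutSystems services reported as running on this node.")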

Regards,
João Rosado
Hi Riaan,

Regarding the issue you had with the 127.0.0.1 address in the deployment controller address field of the Configuration Tool:

This address MUST be the same on all front-ends! This is very important, since the Configuration Tool writes this value to the database and it will be read by all front-ends!

There have been several situations in the past where customers used 127.0.0.1 as the controller address on the Deployment Controller node and the controller's real address on the front-end. This works as long as the front-end is the last one to run the Configuration Tool with Apply & Exit (i.e. the last one to write the configuration to the DB).

If you then run the Configuration Tool on the Deployment Controller node for some reason, it will write 127.0.0.1 to the DB, and the next time the front-end reads the configuration from the DB it will be broken (unable to reach the Dep. Controller on 127.0.0.1).

This is why, in this field, you should use a hostname or IP address that all nodes can resolve/use to reach the deployment controller.
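
A quick sanity check before hitting Apply & Exit could look like this minimal sketch (Python); the hostname is an assumption - use whatever you entered in the controller address field.

    # Minimal sketch: confirm the configured controller address resolves and
    # is not a loopback address that would break the other nodes.
    import ipaddress
    import socket

    CONTROLLER = "webnode01"  # assumed value from the Configuration Tool

    try:
        infos = socket.getaddrinfo(CONTROLLER, None)
    except socket.gaierror as exc:
        print(f"{CONTROLLER} does not resolve: {exc}")
    else:
        for info in infos:
            addr = ipaddress.ip_address(info[4][0].split("%")[0])
            flag = "LOOPBACK - will break other nodes" if addr.is_loopback else "ok"
            print(f"{CONTROLLER} -> {addr} ({flag})")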

Regards,
João Atalaia
OK, so then...

Deployment Server & Front End (webnode01) config:
  • Controller -> webnode01
  • Network -> Internal IP is blank, Front-End Reg. is automatic
Secondary Front End (webnode02) config:
  • Controller -> webnode01
  • Network -> Internal IP is blank, Front-End Reg. is automatic

Everything looks OK so far, but in the Front-End server registration in Service Center, webnode01's IP is now listed as ::1, which is the IPv6 localhost address. That would be awesome apart from the fact that IPv6 has been completely disabled on the server and static IPv4 IPs are being used. I even checked the NLB, and that is also only configured for IPv4...
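
For what it's worth, a quick look at what the machine's own hostname resolves to might show where the ::1 comes from (this is only my guess at how the automatic registration picks an address, not confirmed platform behaviour) - a minimal sketch in Python:

    # Minimal sketch: print every address the local hostname resolves to;
    # a hosts-file or DNS entry pointing it at ::1 / 127.0.0.1 would explain
    # a loopback address showing up in the automatic registration.
    import socket

    hostname = socket.gethostname()
    print("hostname:", hostname)
    for info in socket.getaddrinfo(hostname, None):
        print("  resolves to:", info[4][0])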

So, back in ServiceCenter/Admin/Front-End Servers, I edited webnode01 and changed the IP to the static IPv4 IP, saved, and SC says everything looks OK... great stuff! Webnode02 is now listed in Front-End Servers with the correct IP, and I can externally confirm the site is running and SSL is working.

I grabbed an app eSpace and republished the running version, checked the General logs in Monitoring, and the deploy happened without errors on both nodes, one after the other... The site is still up and running and everything is great, so far...

But then a nice new error pops up:

Front-end Server webnode IIS Status: Error contacting application: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Checking Environment Health, now webnode01 is showing IIS as down, webnode02 is running fine...

Checking IIS on webnode01: SSL bound to port 443 on all IPs, no host name checking... For all intents and purposes the site is up and running and SSL is live and doing its thing...

Checking IIS on webnode02: SSL bound to port 443 on all IPs, no host name checking...

Same certificate was used, so it cannot be that...

Wanna hear the ace in the hole? The moment I go to webnode01 and change the Front-End IP back to automatic, it reverts itself to ::1 (IPv6 localhost, same idea as 127.0.0.1) and then IIS is happy again...

What am I missing?
As an added bonus, when I try to change the Front-End IP of webnode01 back to ::1, I get told the IP Address must be like 999.999.999.999...

If I change the Front-End IP of webnode01 back to 127.0.0.1, everything in Environment Health is back to normal... I republished the app with no problem deploying...