At Your Service: Implementing Low-Code Microservices
Huge, complex, out-of-control portfolios can hurt the speed of continuous delivery and slow application growth, and even business growth, to a crawl. A healthy architecture that’s flexible enough for fast continuous delivery is the answer. If there’s something OutSystems takes pride in, it’s our ability to respond to market needs. The popularity of microservices inspired us to improve our reference systems and, consequently, impact analysis. With a microservices-based, cloud-native architecture, we could make the architecture of OutSystems portfolios more visible and flexible.
But to stay healthy, the architecture of a portfolio needs to be able to evolve, and an important part of that is breaking monoliths into smaller, more manageable, and easily evolved pieces. That’s only possible by decoupling things, which allows different teams working on different domains to deliver independently at a faster pace. And this is how we get to domain-driven design (DDD), smarter impact analysis, and the power of low-code microservices.
Decoupling things is not straightforward, and it actually brings a whole new set of challenges. So, how did we evolve OutSystems to make this as easy and productive as possible for users? With something called service actions, which are key to developing low-code microservices. If you’re using OutSystems 11, you might have seen them. Read on to find out how we got there.
Design Stage and Initiative Planning
To start, we planned a design session and set off on a research spree. We prepared scripts and conducted dozens of interviews. We thought of relevant personas and set up scenarios they would encounter while addressing matters such as refactoring and factory evolution. We measured the impact of changing reusable public elements with the current OutSystems reuse and composition model in real, large, and mature OutSystems factories.
Once that was done, we went home and relaxed. Just kidding. If only it were this easy. To be honest, we threw a bunch of people in a room, locked the door, and threw away the key, which resulted in an intense and invigorating design session. In the end, we had ideas and prototypes we would explore as we searched for the final design of the perfect solution.

After all that, we realized that although we were on the right track regarding all things architecture, we could also greatly improve continuous delivery by redefining our current dependency system. This resulted in a twofold initiative: an improved dependency system and low-code microservices.
Rethinking Architecture
To improve architecture visibility, guidance, and flexibility, key architecture concepts needed to move closer to the users, who needed easier ways to map the structure of the applications they were building or refactoring (or both) to the principles and best practices we advocate.
Weak Dependencies
To enable the development of low-code microservices, we needed some kind of weak dependency: one that provides increased flexibility in architectures by defining loosely coupled relations between portfolio applications while still delivering typical OutSystems productivity.
Weak dependencies are more flexible. At runtime, the consumer module only needs to know the signature of the element to be able to reuse it. So, when the module in which the referenced element is implemented (the producer) changes only the implementation of the exposed elements, the consumer module does not become outdated, and the latest producer version is immediately available to the consumer at runtime.
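
To build an intuition for the difference, here’s a loose analogy in Python; it’s not how the OutSystems platform implements weak dependencies, just an illustration of a consumer that depends only on a name and a signature and resolves the implementation when the call happens (the registry and element names are hypothetical):

from typing import Callable, Dict

# Simulated "environment": maps an exposed element's name to its current implementation.
registry: Dict[str, Callable[[int], float]] = {}

def consumer(customer_id: int) -> float:
    # The consumer only knows the name and the signature GetDiscount(int) -> float;
    # the implementation is looked up at call time, not at build time.
    get_discount = registry["GetDiscount"]
    return get_discount(customer_id)

# First "deployment" of the producer.
registry["GetDiscount"] = lambda customer_id: 0.05
print(consumer(42))  # 0.05

# The producer changes only its implementation; the consumer is untouched,
# never becomes outdated, and immediately picks up the latest version.
registry["GetDiscount"] = lambda customer_id: 0.10
print(consumer(42))  # 0.1

Only a change to the signature itself would force the consumer to be updated, which is exactly the breaking-change scenario that impact analysis needs to flag.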
Service Modules and Service Applications
Two new concepts allowed for better organization, and in turn, healthier architectures: service module and service application.
Service modules are OutSystems modules that enforce a higher level of functionality segmentation, with a focus on functionality reuse. In these modules, we restricted the available elements to those the users' core services might be composed of. Service Studio changed accordingly: only relevant tabs and valid elements show in the Toolbox, and only valid elements can be referenced.
Service applications, in turn, are used to organize service modules, the only type of module they can contain, thus providing guidance on architecture best practices.
Users also needed a way to convert core services built using the existing module types (web and mobile) into service modules, as well as the ability to turn traditional applications into service applications. This would let them take full advantage of the new features when evolving their architectures. So Service Studio now has two new commands for these conversions.
Are you wondering why we did this in Service Studio? Well, it’s where you do refactoring and where we already had other module manipulation functionality, such as module cloning.
Breaking the Right Things Apart
And what about breaking the monolith? Well, that was not without its challenges as well. First, there was figuring out how the OutSystems Domain-Specific Language (DSL) would evolve to enable decoupling by introducing the low-code microservice concept. A set of requirements, some brainstorming, and some iterations resulted in these viable options:
- An evolution of the REST API Method: The idea here was to extend the current functionality of the element by adding a property that defines whether it can be called only from the application’s environment or from anywhere, like traditional REST API methods. It would also mean improving REST API methods to support TrueChange™ and impact analysis.
- An evolution of the (Public) Server Action: In this case, adding a toggle property to public server actions would change their behavior to services (loosely coupled) and extend the current server action functionality. The action would keep all the characteristics of a public server action but be loosely coupled, meaning that, at runtime, its logic would be executed out-of-process.
- A New Language Element: This would be an entirely new addition to the OutSystems visual DSL and would represent pieces of decoupled and reusable logic. A new element with the flexibility of a REST API method and the productivity of a server action.
We evaluated these options using the 80/20 rule, one of the seven rules we commit to following at OutSystems, to guide us. We thought about how each option would impact the experience of OutSystems developers and about the scale of the investment required to bring the change to the platform. We considered things such as semantic transparency, as well as everything we would need to design, build, or change under the hood.
When framed in this particular scenario, REST and public actions performed poorly for semantic transparency. Adding a new language element performed well, so, even though it would take more effort, it had to be done.
Creating a New OutSystems Language Element
The new language element needed to provide great productivity, so it could not disrupt, or be too different from, how OutSystems developers currently reuse application logic or how we provide server actions. We designed it so that it could be created from the Service Studio elements tree and be discoverable and consumable from the Manage Dependencies window, just like a server action.
On top of that, we made sure it would be the target of impact analysis so that developers are always notified of breaking changes. The element had to allow decoupling, so it was designed to generate weak dependencies. Only changes to the signature can generate breaking changes, and changes to the implementation don't leave consumers outdated.
Piling on the productivity, the new element also benefits from strong typing, meaning developers can use public entities and structures in the signature. So, when consuming it, no additional structures are created (as happens when consuming REST); instead, the original types are simply referenced.
Last but not least, exception handling and platform authentication were also taken into account.
Learnability and Tweaking
This new element was very close to server actions, and the differences between the two were not always apparent; a set of usability tests confirmed it. This meant that service and server actions could end up being used interchangeably, which facilitates change but ignores the challenges and pitfalls of decoupling logic (for example, dealing with the network or with database transactionality), and that was something we didn’t want to happen.
So we experimented a bit with things like the element’s name, icon, and placement in Service Studio’s elements tree:

For the icon, the product design team’s inspiration was the “ball and socket” UML representation.

People from all kinds of backgrounds discussed the name of the element, considering things such as services, APIs, and loosely coupled logic. Placement was also considered. This was not a traditional integration (like SOAP or REST), and it was also not just a good old server action, so in the end it got its own folder.

You know where this is going, right? The element was named service action. It lives happily in a Service Actions folder placed between the Server Actions and Integrations folders in Service Studio’s elements tree, and it even has its own identifiable icon.
So, what happens under the hood?
REST For the Wicked
So how could we have service actions executed out-of-process, meaning remotely executing logic? After researching what others were doing in the world of microservice architectures and technology, we defined evaluation criteria for choosing the technology that would best fit our goals and compared the candidates against them.
In the end, we decided to go with REST for the following reasons:
- REST is widely used for enterprise scenarios.
- Its performance is acceptable for most enterprise scenarios.
- The OutSystems REST API implementation was developed using Web API, which is flexible. Our solution would use current platform capabilities and extend them as needed.
- In the future, if it came to that, we could improve performance by adding binary serialization to the current implementation, using something like Protocol Buffers (we even did a PoC).
Safe and Sound: Secure Communications
It’s not enough to communicate in a decoupled way; it has to be done securely. It was time to apply threat modeling techniques and map credible threat scenarios. After that came the design of a mitigation strategy, which among other things, stated we would sign each request with an authorization token. Just like a bag of super secret treats.
Regarding the authorization tokens, we went with JSON Web Tokens (JWT), an open industry standard (RFC 7519) for representing claims securely between two parties.
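
As a purely illustrative sketch (the platform’s actual claims, keys, and signing scheme aren’t described here), this is roughly what issuing and validating such a token looks like with a generic library like PyJWT; the claim names and secret are placeholders:

import time
import jwt  # PyJWT

SECRET = "environment-shared-secret"  # placeholder, not how the platform manages keys

# Caller side: sign the claims that identify the request before sending it.
claims = {
    "iss": "ConsumerModule",       # hypothetical claims, for illustration only
    "sub": "<UserId>",
    "exp": int(time.time()) + 60,  # short-lived token
}
token = jwt.encode(claims, SECRET, algorithm="HS256")

# Receiving side: reject the request unless the signature and expiration check out.
try:
    verified_claims = jwt.decode(token, SECRET, algorithms=["HS256"])
except jwt.InvalidTokenError:
    raise PermissionError("Request rejected: invalid authorization token")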
But how would all of this go back and forth? Our payloads look something like this:
Request:

User-Agent: OutSystemsPlatform
Content-Type: application/json
Content-Language: <Locale>
Authorization: <Authorization Token>

{
  inputParameters: {
    <In1Name>: <In1Value>,
    …,
    <InNName>: <InNValue>
  },
  requestKey: "<RequestKey>"
}

Response (success):

{
  outputParameters: {
    <Out1Name>: <Out1Value>,
    …,
    <OutNName>: <OutNValue>
  }
}

Response (failure):

{
  exception: {
    name: "DatabaseException" | "UserException" | …,
    specificType: "<ConcreteExceptionName>",
    message: "<Message>"
  }
}
There are a few things worth pointing out regarding the payload structure:
- The service action’s logic is aware of the logged-in user, maintaining the user context throughout calls.
- Service actions can only be used from the environment they were exposed in.
- Locale is preserved across calls, enabling users to use translations based on the caller application’s selected locale.
- User and communication exceptions can be trapped in the caller applications.
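
To make the contract above more concrete, here’s a rough sketch of what a caller honoring it could look like, using Python’s requests library; the endpoint URL, token, request key, and parameter names are placeholders rather than the platform’s internal API:

import uuid
import requests

def call_service_action(endpoint: str, token: str, locale: str, inputs: dict) -> dict:
    """Sketch of a call that follows the payload structure described above."""
    headers = {
        "User-Agent": "OutSystemsPlatform",
        "Content-Type": "application/json",
        "Content-Language": locale,
        "Authorization": token,
    }
    body = {
        "inputParameters": inputs,
        "requestKey": str(uuid.uuid4()),  # placeholder request key
    }
    response = requests.post(endpoint, json=body, headers=headers, timeout=10)
    payload = response.json()

    # A failure response carries an exception block the caller can trap.
    if "exception" in payload:
        exc = payload["exception"]
        raise RuntimeError(f"{exc['name']} ({exc['specificType']}): {exc['message']}")

    return payload["outputParameters"]

# Hypothetical usage:
# outputs = call_service_action(
#     "https://<environment>/<producer>/GetDiscount",
#     token="<Authorization Token>",
#     locale="en-US",
#     inputs={"CustomerId": 42},
# )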
Closing Time: To the Future and Beyond
We learned so much from this process. Effectively creating awareness of all the challenges and pitfalls of decoupling things is a very complex matter, so there’s still plenty of room for improvement.
So after months of new concepts and hard work, we finally went home and relaxed. Still kidding. This might seem like the end, but it’s actually another type of beginning. You see, now that we have OutSystems developers using low-code microservices, it’s time for us to collect feedback and see the results of our work. Then we’ll be right back at it again.
Further Reading
Want to learn how to use OutSystems low-code microservices? Check out the following resources: