Fantastic* Frameworks and How to Build Them: Tools and Technologies
Co-authors: Bruno Martinho, Gabriel Lundgren, João Ferreira, Mariana Bexiga, Rúben Gonçalves and Tiago M. Pereira
Low-code platforms, like OutSystems, greatly contribute to democratizing app creation to solve real-world problems, and components are essential to that. Besides the obvious code reusability, they enable developers to build software faster without reinventing the wheel while (ideally) simplifying the usage of said code or library.
Frameworks and libraries provide reliable and reusable code, allowing the developer to focus on app functionality. This is why good frameworks accelerate the development process, and the best ones are easy to use.
In this article, we’ll explain how a good component framework can save brain cells and make life easier. We’ll also get into the very first aspects you need to consider when building a framework — tools and technologies. And to illustrate just that, we will tell you a story based on a real-life problem.
But, before we dive in, know that this is the first in a series of posts on how to build a fantastic* framework. If you want to learn more about how you can divide and organize work, create a diagram that works as a decision matrix to help you build new components, and improve the developer experience, among many other things, follow us on Twitter and we’ll keep you posted on upcoming articles.
*From dev to dev: sorry, but our editing team made us use this adjective for SERP (Search Engine Results Page) purposes…
Note from the editing team: The word grew on them, and look how high they’re flying now on your results page.
Who Are We, Anyway, and Why Are We Building a Component Framework?
Alright, let’s start with a little context.
Formed in February 2020, the Data Visualization Team was born out of the need to stabilize one of the OutSystems components, the Data Grid Web, and support its users. Data Grid Web allows developers using the OutSystems platform to build complex, scalable, and data-rich interfaces for apps. As a result, application users can securely view, explore, and edit large amounts of data in an Excel-style user interface.
Starting a Framework? Pick Your Tools
A wise man once said — actually, it was our Team Lead, Bruno Martinho — that pain leads to change. We were about to experience that first-hand.
As we rolled up our sleeves and began toiling away, we struggled — the effort required to maintain the existing code was excruciating (let’s not even mention adding new features).
Why? Due to the traditional way these components had been developed (there were a lot of difficult circumstances and constraints), altering them required wizard's hands, black magic, and above all, plenty of time.
There was virtually no automation. Processes were primarily manual and error-prone. Imagine, just imagine, several people fixing bugs and working on features in copies of a multi-thousand-line JavaScript file — kept in OML — and then going for a manual merge, which, of course, was done in Notepad++. The result was what one would expect, with multiple defects caused by merge mistakes, code losses, and worst of all: a lack of trust and confidence from customers.
As we are lazy (and strong contenders*), we decided to change things, starting with the obvious — tools — and forbidding ourselves from ever doing something similar, or else we’d be bound for some harsh physical punishment**.
* Yes, yes, we did notice the apparent contradiction.
** No developer was physically harmed.
Things turned out well, so after we stabilized the Data Grid Web component, we applied the lessons learned to its counterpart for the Reactive runtime, which we built from scratch. Then, as our team grew, we consolidated our framework while redesigning two other reactive components — OutSystems Charts and OutSystems Maps.
When you’re building a new framework, the first thing you need to do is pick the appropriate tools and technologies. So that is where our story will really begin. Here’s what we learned.
Good Old Git
Most of the issues we told you about began right when we started tackling our web component and can be summarized by the old copy-paste-the-code motto.
We have, however, a new mantra for you. Repeat after us:
- Copying and pasting JavaScript code is bad.
- Manually merging JavaScript code is horrible.
- Maintaining JavaScript in an OutSystems module is worthy of punishment.
Yet, as you now know, that was the problem we faced.
To make the process more agile and our lives easier, we needed the right tools. And sometimes, the easiest solution is right in front of you. So, the first thing we did was move our JavaScript code to GitHub, which instantly solved many of the issues we had. Faster merges? Check. Code traceability? Check. Code reviews? Check.
Performing code reviews was nearly impossible before as we had no means to compare what had been added to the code and were, therefore, unable to track changes properly. Git got that out of the way. Bang.
Our spirits were lifted, but we weren’t completely satisfied, so we turned our attention to our next big pain. To deliver a feature, you need a sandbox, so that you can fix, test, and generally mess around with your code without standing in the way of other developers (or the other way around).
We needed the ability to create a new application containing a replica or clone of those modules, but there’s no magic button for that...
Or so we thought. The plot thickens.
Saved by the API
Meanwhile, some of our colleagues encouraged us to use TypeScript, so we decided to convert our JavaScript code base into TypeScript. We thought it would be a much more complex move, but we benefited simply by changing the extension from .js to .ts — and starting to use Visual Studio Code. With these changes alone, we could unleash TypeScript’s potential: our code could be compiled, and we could start catching mistakes even before they’d get to the browser.
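To illustrate the kind of slip the compiler now catches for us, here’s a generic sketch (the `GridColumn` type is an invented example, not our actual Data Grid code):

```typescript
// Hypothetical example of the safety we gained: plain JavaScript would have
// accepted a string width and failed at runtime in the browser; TypeScript
// flags it at compile time.
interface GridColumn {
  header: string;
  width: number; // column width in pixels
}

function totalWidth(columns: GridColumn[]): number {
  return columns.reduce((sum, col) => sum + col.width, 0);
}

const columns: GridColumn[] = [
  { header: "Name", width: 120 },
  { header: "Email", width: 200 },
];

console.log(totalWidth(columns)); // 320

// The line below would not even compile — tsc reports:
// Type 'string' is not assignable to type 'number'.
// totalWidth([{ header: "Id", width: "80" }]);
```

In plain JavaScript, that last call would have produced `"12080"`-style string concatenation bugs that only surfaced in the browser.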
With Git and TypeScript, merges were now mostly automatic, meaning that two engineers didn’t have to waste a whole eight-hour day copying and pasting code — and then validating the copying and pasting — every time we did a release.
But, for a team who likes to be as efficient as possible and save time on tedious tasks to focus on what we really enjoy doing, it still wasn’t good enough. Setting up a sandbox took about an hour because the process involved cloning three or four different modules and then changing all the references to the right modules.
There was indeed no magic button, but instead, two magical, underdeveloped, experimental, and fantastic APIs.
And you know the saying, “Give a developer an API, and they’ll create a whole application around it” or something like that. We got not one, but two APIs! JUST PICTURE our happiness!
Using these #blessed APIs (our codename for them, as we are not authorized to reveal their names), we created a new app that allowed us to clone entire applications, change the references to the cloned modules, and publish them to our environment.
But, much better than that, we designed it with the concept of “templates.” What exactly is a “template”? It’s a virtual app that features all the modules (which actually belong to different apps) that a developer needs in their sandbox to build a new feature or fix an issue (in our case, the DataGridWeb, DataGridSampleWeb, DataGridExample, and DataGridTestsWeb modules).
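A rough sketch of the idea in TypeScript — the types and naming scheme below are our illustration for this article, not the real (undisclosed) API:

```typescript
// Illustrative sketch of a sandbox "template": a named list of modules,
// possibly from different apps, that get cloned together into one sandbox.
interface SandboxTemplate {
  name: string;
  modules: string[]; // modules to clone into the sandbox app
}

const dataGridTemplate: SandboxTemplate = {
  name: "DataGridWeb",
  modules: [
    "DataGridWeb",
    "DataGridSampleWeb",
    "DataGridExample",
    "DataGridTestsWeb",
  ],
};

// Derive the cloned module names for one issue's sandbox; the suffix scheme
// (e.g. "RGRIDT-1234" -> "_RGRIDT_1234") is an assumption for the example.
function sandboxModules(template: SandboxTemplate, issue: string): string[] {
  const suffix = issue.replace("-", "_");
  return template.modules.map((module) => `${module}_${suffix}`);
}

console.log(sandboxModules(dataGridTemplate, "RGRIDT-1234"));
// ["DataGridWeb_RGRIDT_1234", "DataGridSampleWeb_RGRIDT_1234",
//  "DataGridExample_RGRIDT_1234", "DataGridTestsWeb_RGRIDT_1234"]
```

The app then clones each module under its derived name and rewires the references between the clones.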
The result? Our one-hour sandbox setup became a five-minute step, including the time to log in to our app and to Git to create the branch.
All Hail Pipelines
With things finally on the fast track, we took another step towards automation. We didn’t want to compile the TypeScript code on our machines only to realize that other devices would compile the same code differently, thus generating problems we wouldn’t pick up on easily.
So, using Azure DevOps, we created a new pipeline that would compile our TypeScript code after each pull request, so that the final code was available to download and insert in the module. That ensured that the released code was always compiled under the same conditions.
However, we still had to open the DataGridWeb module, add the latest version of previously compiled code, and of course, open and update the consumer modules. Not good enough for our books.
How did we skip this step? By now, you already know the answer… Using the #blessed APIs, we created yet another app. This application begins working its magic when Azure finishes compiling the code.
We summarized its inner workings in this very plausible monologue: “OK, the code is compiled. This is the JavaScript code, this is the CSS code; now where shall I put it? Oh, I must send it to this environment, so I will open this module, place the JavaScript and CSS code folders inside, publish the module and, obviously, also publish all the other modules that use the same code.”
“Holy cow!” you might say (as we did back then). This means that when a pull request is accepted, the TypeScript code is compiled to JavaScript, which is then sent to our environment, inserted in the correct module, published, and the consumer modules refreshed. Hypothetically, every time a pull request is accepted following the code reviews, our components are ready to be uploaded to the Forge by the time the app completes its job. Who needs magical buttons, when you can create buttonless magic?
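The monologue above maps to a simple sequence of steps. Here’s a self-contained sketch in TypeScript, where `Environment` and `Module` are in-memory stubs standing in for the undisclosed #blessed APIs:

```typescript
// Self-contained sketch of the post-compile publish flow. All types here are
// illustrative stubs, not the real (undisclosed) APIs.
interface BuildOutput {
  javascript: string;
  css: string;
}

class Module {
  constructor(readonly name: string, private log: string[]) {}
  addCode(build: BuildOutput): void {
    // In reality: place the compiled JS and CSS folders inside the module.
    this.log.push(`code added to ${this.name}`);
  }
  publish(): void {
    this.log.push(`published ${this.name}`);
  }
}

class Environment {
  readonly log: string[] = [];
  private consumers = new Map<string, string[]>();

  registerConsumer(producer: string, consumer: string): void {
    const list = this.consumers.get(producer) ?? [];
    list.push(consumer);
    this.consumers.set(producer, list);
  }

  openModule(name: string): Module {
    return new Module(name, this.log);
  }

  consumersOf(name: string): Module[] {
    return (this.consumers.get(name) ?? []).map((c) => this.openModule(c));
  }
}

// The flow itself: update the producer module, publish it, then publish
// every consumer module that uses the same code.
function publishCompiledCode(env: Environment, build: BuildOutput): void {
  const producer = env.openModule("DataGridWeb");
  producer.addCode(build);
  producer.publish();
  for (const consumer of env.consumersOf("DataGridWeb")) {
    consumer.publish();
  }
}

const env = new Environment();
env.registerConsumer("DataGridWeb", "DataGridSampleWeb");
publishCompiledCode(env, { javascript: "bundle.js", css: "styles.css" });
console.log(env.log);
// ["code added to DataGridWeb", "published DataGridWeb",
//  "published DataGridSampleWeb"]
```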
Once two people approve the pull request, the code is merged, and the magic we described kicks in — we are now down to a 10-minute process.
What About Testing?
While shortening the whole process was a much anticipated happy ending to this story, using the right tools and technologies has brought us other benefits from an engineering perspective. Testing is one of them.
Thanks to our Azure setup, we now have a whole new pipeline of automated tests (using a framework that our R&D Team built, but that is another story). Every time there’s a pull request, the tool will run hundreds of automated tests (462 x 4 browsers, to be precise) in the respective sandbox of the issue to validate the changes’ impact. That is particularly useful for those carrying out the code review, as they will immediately find out if the changes cause collateral impact or not.
For the pipeline to execute the tests, we created a naming convention for the sandboxes and the branches in Git (e.g. RGRIDT-0000 — yup, the JIRA issue number! :D). So, every time there’s a pull request, the pipeline knows what to do: it knows which environment it must access, finds the sandbox with that specific name, and runs the automated test list. It doesn’t get any greener than this.
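In code terms, the lookup the pipeline performs is roughly this (the exact pattern and the sandbox name format are our illustrative assumptions):

```typescript
// Sketch of the naming-convention lookup: branches are named after the JIRA
// issue (e.g. "RGRIDT-0042"), and the pipeline derives the matching sandbox.
// The regex and "Sandbox_" prefix are assumptions for the example.
function sandboxForBranch(branch: string): string | null {
  const match = /^RGRIDT-\d+$/.exec(branch);
  return match ? `Sandbox_${branch.replace("-", "_")}` : null;
}

console.log(sandboxForBranch("RGRIDT-0042")); // "Sandbox_RGRIDT_0042"
console.log(sandboxForBranch("main"));        // null — no sandbox to target
```

Because the convention is deterministic, no human has to tell the pipeline where to run; the branch name alone is enough.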
If at this point you’re wondering why we’re so obsessed with automation, it’s easy to explain. When you’re a developer programming on your own, you probably won’t face these challenges. You’re in charge of all your code, and you perform one task at a time. However, if you have three or more developers working simultaneously on the same component, it’s easy to understand how things can go wrong.
You change the code. You run the tests. You get the all-green go-ahead. As simple as that, and everyone is happy.
The End (at Least for Now)
The main takeaway from our story — one that has only gained momentum since we first tackled this project — is that building components should be done with the mindset of building a framework.
And for that, you will need to choose the right technologies and the right tools. While it may be tempting, don’t just stick to what you know, because when the only tool in your toolbox is a hammer, everything looks like a nail. Remember that this is only your first step, as you’ll need to look into many more things to build your framework.
As we mentioned, we will address them in other articles, but you can also catch up with our team member, Rúben Gonçalves, in his talk “Building Frameworks: From Concept to Completion” at the OutSystems Developer Conference 2021, now available on-demand. He’ll be there to answer your questions and dive into the complexities of building frameworks.
For now, if this looks like a lot of work, know that it is. But...