With Machine Learning Builder (ML Builder), developers can train, deploy, integrate, monitor, and evolve custom AI and ML models, and add them to their apps, without any data science experience. Since the public announcement in the OutSystems 2020 NextStep keynote, we have been running several sessions for our customers, partners, and the broader developer community. This blog post summarizes the answers to the most frequently asked questions about ML Builder. For information about what ML Builder is and what problems it solves, check out my other blog post.

About ML Builder

1. What does ML Builder do in the background?

Traditionally, to build and deploy a machine learning model, you must follow the steps in what’s commonly known as the Machine Learning Lifecycle (a minimal code sketch of steps 3 to 6 follows the list):

  1. Define the business goal. What is the problem that needs solving? What do you want to predict?
  2. Understand the data. Is data available related to the business goal? Is it accessible?
  3. Prepare the data. This laborious step starts with setting up the infrastructure to consume data. Next is data cleansing, which means detecting and correcting corrupt or inaccurate records, identifying incomplete, incorrect, or irrelevant parts of the data, and replacing, modifying, or deleting the problematic data. Data cleansing is followed by data wrangling, which transforms and maps data from one "raw" form into another format that is more appropriate and valuable for analytics. After that, feature engineering extracts features from the raw data via data mining techniques. Features are all the attributes that feed the model after cleaning, normalization, and transformation, and they include calculated fields (such as the average salary of an employee in the past year). When feature engineering is complete, there is data labelling, which is manual data curation.
  4. Fit the best model. This includes training the model over several experiments, each of which combines a set of features with an ML algorithm.
  5. Evaluate results. Assess the KPIs and metric results of the experiments, that is, the model’s performance on the test dataset.
  6. Deploy the ML model. When a model is deployed, that means it can be consumed, for example as a REST API.
  7. Monitor. Evaluate model performance in production, watch for model degradation, retrain when needed, and close the feedback loop.
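To make steps 3 to 6 more concrete, here is a minimal sketch in Python with scikit-learn; the dataset, column names, and the churn-prediction goal are hypothetical and only serve to illustrate the flow, not how ML Builder works internally.

```python
# Minimal sketch of lifecycle steps 3-6 with scikit-learn.
# The dataset, columns, and business goal (churn) are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Step 3: prepare the data (cleansing plus one simple engineered feature).
df = pd.read_csv("customers.csv")
df = df.dropna(subset=["monthly_spend", "tenure_months", "churned"])
df["spend_per_month_of_tenure"] = df["monthly_spend"] / df["tenure_months"].clip(lower=1)

features = ["monthly_spend", "tenure_months", "spend_per_month_of_tenure"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

# Step 4: fit a model (one "experiment" = this feature set + this algorithm).
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Step 5: evaluate results on the held-out test set.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Step 6: "deploy" - in the simplest case, expose model.predict() behind an API.
```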

ML Builder automates most of these steps using the ever-increasing processing capabilities and mathematical techniques available to infer and replace manual decisions made by data scientists. If your company has data scientists, their productivity is boosted because their data preparation work is drastically reduced.  If your company does not have data scientists or data engineers,  ML Builder accelerates the process of creating a machine learning model and abstracts most of the complexity for non-data scientist developers.

Using ML Builder

2. How much AI knowledge is needed?

Our north star is for any developer with no AI knowledge to be able to train, create, and deploy ML models and weave them into their applications. Although aspirational, this goal is based on the observation that more and more developers are able to use AI tools and auto-ML technologies.

However, for any project using ML, developers should be familiar with these aspects of ML:

  1. Solution design: Definition of the business goal and what they want to predict
  2. Data preparation: Data cleansing, data wrangling, and feature engineering
  3. Model training: Composing the experiments of features and algorithms, training the model and evaluating its results
  4. Deployment: Making the model available to get predictions, evaluating its results, and monitoring the model’s performance in production.

Evaluation is also critical because it might not be obvious whether a model’s performance is good enough. Even the simple definition of “good” is a challenge on its own.
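As a hypothetical illustration of why “good” is hard to pin down, a model can look fine on accuracy and still miss every case that matters; the made-up predictions below show the gap between accuracy and recall.

```python
# Hypothetical illustration: high accuracy can hide poor performance
# on the minority class, which is why defining "good" is not trivial.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]  # only 2 positives out of 10
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # model always predicts 0

print("accuracy :", accuracy_score(y_true, y_pred))                    # 0.8 - looks fine
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall   :", recall_score(y_true, y_pred))                      # 0.0 - misses every positive
```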

3. What is required to run ML Builder? What needs to be set up, or is this all handled automatically by ML Builder?

Currently, ML Builder uses Azure as the AI provider to run the auto-ML process and deploy the model, but we abstract all that complexity from the end user. For example, in the setup stage, if there is no Azure workspace, the tool enables you to create a new one.

How to create a new workspace in the ML Builder

4. Is there any previous work required on the dataset before using it with ML Builder?

Yes! Some preparation might be needed, because understanding the data is one of the key steps of the entire journey. Our recommendations are:

  • Making sure that your target data is clean (for example, removing text entries from an attribute whose value you are trying to predict). This is mandatory.
  • Understanding which parts of the historical data are relevant (for example, if a business process changed six months ago and classifications are now done differently, don’t give that old data to the model to train with). This is mandatory.
  • Transforming some attributes to improve the chances of better model performance (for example, simplifying a long text attribute into a canonical one). This is optional.

Generally, machine learning complies with the rule of “garbage in - garbage out.” However, more and more data preparation steps are being automated in the Auto-ML process, so I hope that in the future, the amount of data preparation needed will be dramatically less.
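To make the recommendations above concrete, here is a small pandas sketch of that kind of preparation; the file, columns, labels, and the six-month cutoff are all hypothetical.

```python
# Hypothetical data preparation before handing a dataset to ML Builder.
import pandas as pd

df = pd.read_csv("support_tickets.csv")

# Mandatory: keep the target clean - drop rows where the target is missing
# or not one of the expected categories.
valid_labels = {"billing", "technical", "account"}
df = df[df["category"].isin(valid_labels)]

# Mandatory: drop history that no longer reflects the current process
# (here, anything older than six months).
df["created_at"] = pd.to_datetime(df["created_at"])
cutoff = df["created_at"].max() - pd.DateOffset(months=6)
df = df[df["created_at"] >= cutoff]

# Optional: simplify a free-text attribute into a canonical one.
df["channel"] = df["channel"].str.strip().str.lower().replace(
    {"e-mail": "email", "phone call": "phone"}
)

df.to_csv("support_tickets_prepared.csv", index=False)
```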

5. Which data processing mechanisms are already implemented to feed the ML?

During the data preparation step, all the attribute manipulations are made. For each attribute, the type is analyzed and new features are built that might help the model (this is called feature engineering). For example, for a date and time attribute, the day of the week, the hour, the month, and more can be extracted.

In the training step, data is split into training and validation sets using a technique called cross-validation.
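A brief sketch of both ideas, assuming a hypothetical orders dataset with an order_date timestamp: derive calendar features from the raw attribute, then score a model with 5-fold cross-validation.

```python
# Hypothetical sketch: date/time feature engineering plus cross-validation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Feature engineering: derive calendar features from the raw timestamp.
df["day_of_week"] = df["order_date"].dt.dayofweek
df["hour"] = df["order_date"].dt.hour
df["month"] = df["order_date"].dt.month

X = df[["day_of_week", "hour", "month", "amount"]]
y = df["returned"]

# Cross-validation: split the data into 5 folds, train on 4, validate on 1,
# and rotate, so every row is used for validation exactly once.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("Mean validation accuracy:", scores.mean())
```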

6. How should a developer use the model that was created by the builder?

It’s best to try to answer that question at the beginning of the ML lifecycle, during the process of understanding the business need and defining the goal. However, depending on the model’s performance, you might want to adapt and change the way the predictions are used. Traditionally, there are three scenarios:

  • Full automation: All the model’s predictions are considered true, and the business process is fully automated.
  • Assisted decision: The model’s results are presented to the user, who then makes the decision manually (either accepting or rejecting the prediction).
  • Mixed scenario: Predictions with a high score (above a certain threshold) are used for automation, and predictions with a low score (below the threshold) are routed to a manual decision (sketched in code below).

Often the recommendation is to start with assisted decisions and gradually move toward maximum automation. It is also important to take these points into consideration:

  • The definition of the threshold: This is often a business decision, as it directly impacts several business metrics.
  • The constant monitoring of the model’s performance in production: This is how you decide whether a model needs to be retrained or a new model is needed.
  • A way to feed new data back to the model, especially regarding the quality of its predictions: This is so the model can improve over time.
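As a minimal sketch of the mixed scenario, the snippet below routes a single prediction based on its confidence score; the threshold value and labels are hypothetical and would normally come from the business.

```python
# Hypothetical mixed scenario: automate confident predictions,
# send the rest to a human reviewer.
CONFIDENCE_THRESHOLD = 0.85  # typically a business decision

def route_prediction(predicted_label: str, confidence: float) -> str:
    """Decide whether a single prediction is applied automatically
    or queued for manual review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-applied: {predicted_label}"
    return f"sent to manual review (confidence {confidence:.2f})"

print(route_prediction("technical", 0.93))  # auto-applied
print(route_prediction("billing", 0.52))    # manual review
```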

7. Can the ML models be used in web and mobile apps?

Once the model is created, it can be used in all applications, whether web or mobile. However, if the question is whether a model can be used as an offline feature in a native application, that is not possible for now, as the model is deployed as a cloud service. We will keep evaluating that need and decide on future developments accordingly.
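Because the deployed model is a cloud service, a web or mobile app typically reaches it from server-side logic over HTTPS. The sketch below shows what such a call could look like; the endpoint URL, authentication header, and payload shape are purely illustrative and not ML Builder’s actual API.

```python
# Hypothetical call to a deployed model's REST endpoint.
# The URL, headers, and payload shape are illustrative only.
import requests

response = requests.post(
    "https://example-ml-endpoint.azurewebsites.net/score",
    headers={"Authorization": "Bearer <api-key>", "Content-Type": "application/json"},
    json={"data": [{"monthly_spend": 42.5, "tenure_months": 18}]},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. predicted label and confidence score
```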

Machine Learning Capabilities

8. Can we select multiple attributes from the same Entity for prediction?

Yes. You can select one or more attributes; both scenarios are possible, and it all depends on your use case. Today, users can choose between text classification and attribute prediction. The model built in text classification is specialized to process text and to infer all the semantics underneath it. That type of approach only works with text; therefore, the way text is handled in the attribute prediction type (where other data types are allowed) is simpler and cannot extract as much information:

  • In text classification models, only one text attribute is used to predict the target. Usually, that attribute is a field that contains the text.
  • In attribute prediction models, one or more attributes with different data types are used to predict the desired target.

How to select multiple attributes from the same Entity for prediction
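To make the distinction between the two model types concrete, here are two tiny hypothetical training tables, one per type.

```python
# Hypothetical shapes of the training data for each model type.
import pandas as pd

# Text classification: a single text attribute predicts the target.
text_df = pd.DataFrame({
    "ticket_description": ["App crashes on login", "Invoice amount is wrong"],
    "category":           ["technical",            "billing"],
})

# Attribute prediction: several attributes of mixed types predict the target.
tabular_df = pd.DataFrame({
    "monthly_spend": [42.5, 120.0],
    "tenure_months": [18, 3],
    "plan":          ["basic", "premium"],
    "churned":       [False, True],
})
```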

9. Can I add new data to the model as time progresses?

As new data is gathered, you might be wondering whether to retrain or even redo the ML model you are using. The most important thing to consider before you take this step is how well your current model is performing. Based on that evaluation, you can decide what to do next.

Building a new ML model with new data can be done in ML Builder. Retraining capabilities are something we are thinking about including in future versions as we gather more feedback from users.

10. Is the self-training process already taken into account by ML Builder?

Self-learning is sometimes confused with automatic retraining. They are different concepts:

  • Self-learning is an ML technique or paradigm whereby an ML model learns without separate reinforcement input or advice input from the environment.
  • Automatic retraining assumes your model features are still valid and you just need to recalculate the algorithm coefficients.

Either way, to answer the question: no, self-learning is not yet incorporated as an out-of-the-box feature of ML Builder. We are on the fence about self-learning in the context of what matters for solving the business problem. Maybe self-learning is the answer, maybe not. In some cases, the data might change so drastically that neither self-learning nor retraining is a good solution, because an entirely new model might be needed.
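To illustrate the difference, automatic retraining would look roughly like the sketch below: the same features and algorithm, with the coefficients recomputed on old plus new data. Everything here is hypothetical and not an ML Builder feature.

```python
# Hypothetical illustration of automatic retraining: same features,
# same algorithm, coefficients recomputed on old plus new data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

features = ["monthly_spend", "tenure_months"]

old_data = pd.read_csv("training_data_2020.csv")
new_data = pd.read_csv("training_data_2021.csv")
combined = pd.concat([old_data, new_data], ignore_index=True)

model = LogisticRegression(max_iter=1000)
model.fit(combined[features], combined["churned"])
# If the underlying data has changed too much, this is not enough:
# the feature set itself (and possibly the model type) must be revisited.
```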

11. Will ML Builder allow us to update the learning parameters needed for training the model, e.g., the learning rate?

With ML Builder, we aim to reduce the need for users to take formal courses in AI or ML; we simplify how these technologies are used while making them accessible to all of our users. Expanding those capabilities is something we might consider developing and including in future versions as we gather more feedback from users.

12. What about Python libraries? Spacy, Keras, etc?

We are leveraging auto-ML, so we are not focusing on specific capabilities to configure the ML models. However, that does not mean we do not support the latest technologies. For example, we have included BERT for text processing, should a user decide to configure GPUs for their training machines.

Looking Ahead

Is there any other question you would like us to answer? If your question is “What is Machine Learning Builder?” we’ve got you covered here.