What is the Machine Learning Service?

The Machine Learning (ML) Service provides a space for ML and artificial intelligence (AI) enrichment within the broader context of a running Totara instance.

Explaining recommendations in the two different systems: Legacy Recommender vs Machine Learning Service

The Totara Recommendations system was introduced as part of Engage in Totara 13. The initial implementation was called Recommenders (located in the codebase under extensions/ml_recommender). This is a cron-based system that works by building a model and generating recommendations in an overnight job, which are then imported into Totara. The Recommender communicates via a series of shell scripts, as described on the Recommender installation and configuration page. While this system is relatively simple to implement, it does not scale well for large sites. Recommendations are generated every cycle rather than in real time, which means new users or new content do not have recommendations until the next cycle.

Totara 15 adopted a different approach for the deployment of the Recommenders engine. In Totara 15 we introduced the Machine Learning Service (located in the codebase under extensions/ml_service) as a host for the Recommenders engine, which runs inside it as a subcomponent. The recommendations are generated in real time based on a model that is refreshed periodically. This brings obvious improvements to the experience for the end user, but it also allows for better system efficiency and site performance. The ML Service is a separate module that can run on a separate host from the Totara site if required. Communication between Totara and the ML Service is via API calls in real time. Updates to the Recommender will happen based on user activity.

Upgrading the legacy Recommender to the Machine Learning Service

The legacy Recommender is supported in Totara 15 and will continue to be fully functional when Totara 13 or 14 is upgraded to Totara 15. Totara recommends upgrading to the new Machine Learning Service, as the legacy Recommender has been deprecated in Totara 17 and will be removed in Totara 19.

The upgrade process is described below.

This step is required only if you are using the legacy Recommender on Totara 13 or 14 and want to use the Recommender with the new ML Service.

The legacy Recommender works via the execution of three separate scripts:

  1. Export data (php server/ml/recommender/cli/export_data.php).
  2. Train recommender model (php server/ml/recommender/cli/recommender_command.php).
  3. Import data back to the server (php server/ml/recommender/cli/import_recommendations.php).

The new Machine Learning Service does not require tasks 2 and 3. Task 1 still needs to be scheduled. If you have tasks 2 and 3 scheduled in cron, disable them but leave task 1 running.
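
As a rough illustration, assuming a Linux host where the Totara CLI scripts are run directly from cron, the resulting crontab might look something like the sketch below. The PHP binary path, the Totara directory and the timings are placeholders for your own values.

Code Block
bash
# Hypothetical crontab sketch - adjust paths and timing to your installation.
# Task 1: keep the data export scheduled (still required by the ML Service).
0 2 * * * /usr/bin/php /var/www/totara/server/ml/recommender/cli/export_data.php
# Tasks 2 and 3: model training and importing are handled by the ML Service now,
# so remove or comment out these entries if they exist.
# 0 3 * * * /usr/bin/php /var/www/totara/server/ml/recommender/cli/recommender_command.php
# 0 4 * * * /usr/bin/php /var/www/totara/server/ml/recommender/cli/import_recommendations.php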

...

Once the above steps are completed, install the new Machine Learning Service by following the installation steps below and the Recommenders engine configuration on Totara. When the service is successfully installed, running and configured on Totara, the Recommenders engine model will train in the ML Service and Totara will start getting recommendations from the service.

Configuration

The URL of the ML Service and a secret key need to be configured in Totara for a successful connection with the service. This can be done from the admin settings page of Totara via https://[your_domain]/server/admin/settings.php?section=machine_learning_environment or in the config.php script by adding the following lines:

Code Block
php
$CFG->ml_service_url = 'http://mlservice:5000'; // The URL of the ML Service
$CFG->ml_service_key = 'authenticationkey'; // The secret key used to authenticate the connection to the ML Service

Installation and running guide

The Machine Learning Service can be installed and run in a Docker container, on Linux (with or without supervisor), or on Windows. The installation and configuration instructions for the different platforms are included in extensions/ml_service/README.md.
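
As an illustration only (not a substitute for the README), a Docker-based run might look something like the sketch below. The image name, host paths and published port are assumptions; the port simply matches the example URL http://mlservice:5000 used in the configuration above, and ML_MODELS_DIR and ML_LOGS_DIR are the environment variables described further down this page.

Code Block
bash
# Hypothetical example - image name, host paths and port are placeholders.
# ML_MODELS_DIR and ML_LOGS_DIR tell the service where to store models and logs.
docker run -d --name mlservice \
  -p 5000:5000 \
  -e ML_MODELS_DIR=/opt/ml_service/models \
  -e ML_LOGS_DIR=/opt/ml_service/logs \
  -v /srv/ml_service/models:/opt/ml_service/models \
  -v /srv/ml_service/logs:/opt/ml_service/logs \
  totara/ml_service:latest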

After the ML Service is successfully installed, the data export script must be scheduled to run via cron:

...

The ML Service will fetch the data cached by this task at regular intervals. The default frequency with which the ML Service fetches the data and updates the recommendation model is once every 24 hours. This can be modified by following the instructions in the file extensions/ml_service/README.md.

Note that the ML Service requires two directories: one for ML models and one for logs. These directories can be configured for the ML Service with the environment variables ML_MODELS_DIR and ML_LOGS_DIR on the hosting platform. Users should make sure that the service has write access to these directories (a minimal sketch of this setup follows the list of Python versions below). The service is designed to run with all micro versions of the following minor versions of Python 3:

  • Python 3.6 (for Totara 16 and under)
  • Python 3.7
  • Python 3.8
  • Python 3.9
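
For a non-Docker installation, the sketch below shows one way to prepare these directories and expose them to the service; the directory locations and the mlservice system user are assumptions, and the actual command used to start the service is described in extensions/ml_service/README.md.

Code Block
bash
# Hypothetical sketch - directory locations and the 'mlservice' user are placeholders.
# Create the two directories the service needs and give it write access.
sudo mkdir -p /opt/ml_service/models /opt/ml_service/logs
sudo chown -R mlservice: /opt/ml_service/models /opt/ml_service/logs

# Point the service at them via the documented environment variables.
export ML_MODELS_DIR=/opt/ml_service/models
export ML_LOGS_DIR=/opt/ml_service/logs

# Start the service itself as described in extensions/ml_service/README.md.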

The ML Service can be installed and run on the same host machine as Totara (in a Docker container or without), or on a different Windows or Linux machine.

When configuring the ML Service from scratch for Totara 15 onwards, the following scheduled tasks should be disabled:

  • Export user data for recommendation processing (\ml_recommender\task\export)
  • Import user recommendations (\ml_recommender\task\import)

These should be set up as shown below:

...

Note

Note for Windows users: The Recommenders engine in the ML Service uses a library called LightFM for modelling the recommendations. The LightFM library needs to be compiled with an OpenMP-enabled C compiler for multi-threading. As this is hard to set up on Windows, all model fitting will be single-threaded. If you'd like to use the multi-threading capabilities of LightFM on this platform, you should try running the ML Service via Docker.


© Copyright 2022 Totara Learning Solutions. All rights reserved.