Instant Scalability with MongooseIM and CETS

The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend, which makes it much easier to scale up.

It is difficult to predict how much traffic your XMPP server will need to handle. Are you going to have thousands or millions of connected users? Will you need to deliver hundreds of millions of messages per minute? Answering such questions is almost impossible when you are just starting out, which is why MongooseIM offers several ways to scale.

Clustering

Even one machine running MongooseIM can handle millions of connected users, provided that it is powerful enough. However, a single machine is not recommended for fault tolerance reasons: every time it needs to be shut down for maintenance, an upgrade, or because of an issue, your service experiences downtime. This is why we recommend a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and restarted before moving on to the next one, maintaining fault tolerance and avoiding unnecessary downtime. During such an upgrade you can also increase the hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier: you only need to add new nodes to the already deployed cluster.

Mnesia

Mnesia is a built-in Erlang database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

  1. Consistency issues tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which operators of a general-purpose XMPP server should not be expected to have.
  2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously limiting the options for automatic scaling.
  3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

After trying to mitigate such issues for a couple of years, we have concluded that it is best not to use Mnesia at all. First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia for in-memory data. For example, a shared table of user sessions is necessary for routing messages between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still keeps its schema on disk, so the first two issues listed above are not eliminated.
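
For illustration, this is roughly how such a relational database connection is defined in mongooseim.toml. It is a minimal sketch, assuming a local PostgreSQL instance; the pool option names follow the MongooseIM documentation:

[outgoing_pools.rdbms.default]
  workers = 10  # number of parallel connection workers

  [outgoing_pools.rdbms.default.connection]
    driver = "pgsql"
    host = "localhost"  # assumption: the database runs next to the server
    database = "mongooseim"
    username = "mongooseim"
    password = "mongooseim_secret"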

Introducing CETS

Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to store all your persistent data. Getting rid of Mnesia removes the last obstacle on your way to easy and simple management of MongooseIM. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVCs), which can be costly, can get out of sync, and require additional management. Furthermore, with CETS you can easily set up automatic scaling of your installation.
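
In the configuration file, enabling CETS comes down to a single section of mongooseim.toml. A minimal sketch, assuming the RDBMS-based discovery described above (the Helm chart used below generates the equivalent configuration for you):

[internal_databases.cets]
  backend = "rdbms"  # each node registers itself in the relational database and discovers its peers there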

Installing with Helm

As an example, let’s quickly set up a cluster of three MongooseIM nodes. You will need to have Helm and Kubernetes installed. The examples were tested with Docker Desktop, but they should work with any Kubernetes setup. As the first step, let’s install and initialise a PostgreSQL database with Helm:

$ curl -O https://raw.githubusercontent.com/esl/MongooseIM/6.2.1/priv/pg.sql
$ helm install db oci://registry-1.docker.io/bitnamicharts/postgresql \
   --set auth.database=mongooseim --set auth.username=mongooseim --set auth.password=mongooseim_secret \
   --set-file 'primary.initdb.scripts.pg\.sql'=pg.sql

It is useful to monitor all Kubernetes resources in another shell window:

$ watch kubectl get pod,sts,pvc,pv,svc,hpa

As soon as pod/db-postgresql-0 is shown as ready, you can check that the DB is running:

$ kubectl exec -it db-postgresql-0 -- \
  env PGPASSWORD=mongooseim_secret psql -U mongooseim -c 'SELECT * from users'

As a result, you should get an empty list of MongooseIM users. Next, let’s create a three-node MongooseIM cluster using the Helm Chart:

$ helm repo add mongoose https://esl.github.io/MongooseHelm/
$ helm install mim mongoose/mongooseim --set replicaCount=3 --set volatileDatabase=cets \
   --set persistentDatabase=rdbms --set rdbms.tls.required=false --set rdbms.host=db-postgresql \
   --set resources.requests.cpu=200m

By setting persistentDatabase to rdbms and volatileDatabase to cets, we eliminate the need for Mnesia, so no PVCs are created. To connect to PostgreSQL, we specify db-postgresql as the database host. The requested CPU resources are 0.2 of a core per pod, which will come in handy for autoscaling later. Keep an eye on the shell window where watch kubectl … is running to make sure that all MongooseIM nodes become ready. It is useful to check the logs as well, e.g. kubectl logs mongooseim-0 should display logs from the first node. To see how easy horizontal scaling is, let's increase the number of MongooseIM nodes (which correspond to Kubernetes pods) from 3 to 6:

$ kubectl scale --replicas=6 sts/mongooseim

You can use kubectl logs -f mongooseim-0 to see log messages about each node newly added to the CETS cluster. With helm upgrade, you can perform rolling upgrades and scaling as well. The main difference is that changes made with helm are permanent: they become part of the release definition, while kubectl scale is a one-off change.
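
For example, a helm upgrade equivalent of the scaling command above could look like this (a sketch, reusing the values provided at install time):

$ helm upgrade mim mongoose/mongooseim --reuse-values --set replicaCount=6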

Autoscaling

Should you need automatic scaling, you can set up a Horizontal Pod Autoscaler. Make sure that you have the Metrics Server installed; for Docker Desktop, there are separate installation instructions. We have already set the requested CPU resources to 0.2 of a core per pod, so let's start the autoscaler now:

$ kubectl autoscale sts mongooseim --cpu-percent=50 --min=1 --max=8

It is going to keep the CPU usage at 0.1 of a core per pod (50% of the requested 0.2). This threshold is deliberately low so that scaling up is easy to trigger; in any real application it should be much higher. You should see the cluster being scaled down to just one node, because there is no CPU load yet. See the reported targets in the window where the watch kubectl … command is running. To trigger scaling up, we need to put some load on the server. We could just fire random HTTP requests at it, but let's use the opportunity to explore the MongooseIM CLI and GraphQL API instead. First, create a new user on the first node with the CLI:

$ kubectl exec -it mongooseim-0 -- \
  mongooseimctl account registerUser --domain localhost --username alice --password secret

Next, you can send XMPP messages in a loop with the GraphQL Client API:

$ LB_HOST=$(kubectl get svc mongooseim-lb \
  --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ BASIC_AUTH=$(echo -n 'alice@localhost:secret' | base64)
$ while true; \
  do curl --get -N -H "Authorization:Basic $BASIC_AUTH" \
    -H "Content-Type: application/json" --data-urlencode \
    'query=mutation {stanza {sendMessage(to: "alice@localhost", body: "Hi") {id}}}' \
    http://$LB_HOST:5561/api/graphql; \
  done

You should observe new pods being launched as the load increases. If there is not enough load, run the snippet in a few separate shell windows. Stopping the script should bring the cluster size back down.
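
You can also query the autoscaler directly to see the current and target CPU utilisation together with the replica count:

$ kubectl get hpa mongooseim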

Summary

Thanks to CETS and the Helm Chart, MongooseIM 6.2.1 can be easily installed, maintained and scaled in a cloud environment. What we have shown here are the first steps, and there is much more to explore. To learn more, you can read the documentation for MongooseIM or check out the live demo at trymongoose.im. Should you have any questions, or if you would like us to customise, configure, deploy or maintain MongooseIM for you, feel free to contact us.
