
Dar Al Atta Production

Production config for Dar Al Atta project

Last updated: 4/16/2026


This guide contains instructions for setting up the production environment for the Dar Al Atta mobile app. The guide is divided into two sections:

  1. Deploying the mobile app to stores: App Store and Google Play
  2. Deploying the backend to production servers

NOTE: Passwords are referred to in this guide as $<password>, e.g. $API_VM_PASSWORD. To see the actual values, check the passwords file.

1. Deploying mobile app

1.1. Building the app

To build the app, do the following:

  1. On every build, make sure to increment these fields in the app.json file in the client directory:
    • version
    • android.versionCode
    • ios.buildNumber
  2. Follow this guide to create a local iOS production build
  3. Follow this guide to create a local Android production build
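For reference, the three fields live together in app.json; a minimal sketch, assuming an Expo-style config (all values below are hypothetical):

```json
{
  "expo": {
    "version": "1.4.0",
    "ios": { "buildNumber": "27" },
    "android": { "versionCode": 27 }
  }
}
```

Note that in Expo's schema, ios.buildNumber is a string while android.versionCode is an integer.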

1.2. Submitting to the app stores

  1. For the App Store, the build is submitted and distributed from Xcode. After that, log in to App Store Connect to submit it for TestFlight or production.
  2. For Google Play, the build output is an .aab file, which you upload and submit for review in the Google Play Console.

2. Deploying the backend

2.1. Services and VMs overview

The backend contains the following services:

  1. API, written in Go
  2. Strapi, as an admin dashboard
  3. Postgres, which contains the api and strapi databases
  4. Metabase, for data analytics

We have three VMs from ODP:

  1. Postgres
    • IP: 10.8.122.54
    • User: root
    • Password: $POSTGRES_VM_PASSWORD
    • Services deployed: only Postgres
  2. API:
    • IP: 10.8.122.39
    • User: root
    • Password: $API_VM_PASSWORD
    • Services deployed: API and admin
  3. Backup:
    • IP: 10.8.122.38
    • User: root
    • Password: $BACKUP_VM_PASSWORD
    • Services deployed: currently only backups of Strapi's media folder and the Postgres data
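Once the VPN tunnel is up (see the next section), each VM is reached over SSH as root. A minimal sketch using the IPs above; the commands are only echoed here, since the VPN tunnel is a prerequisite:

```shell
# VM IPs from the list above; passwords are in the passwords file.
POSTGRES_VM=10.8.122.54
API_VM=10.8.122.39
BACKUP_VM=10.8.122.38

# With the VPN tunnel up, connect as root, e.g.:
for ip in "$POSTGRES_VM" "$API_VM" "$BACKUP_VM"; do
  echo "ssh root@${ip}"
done
```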

2.2. Accessing the VMs

To access the VMs, you must install and log in to FortiClient. These are the login details:

  • Remote gateway: 193.203.254.178
  • Port: 8443
  • Authentication: click on "Save login"
  • Username: mohamed.rasbi
  • Password: $FORTICLIENT_PASSWORD

Note that an email containing an OTP will be sent to mohamed.rasbi@rihal.om, so contact him to get the OTP.

2.3. Setting up the servers

  1. The docker compose files are defined here
  2. API config and firebase credentials are stored here
  3. Strapi env vars are stored here
  4. Nginx config is stored here
  5. Metabase config is stored here
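For orientation only, the compose file on the API VM might look roughly like the sketch below; every image name, path, and port here is an assumption, so defer to the actual files linked in step 1:

```yaml
# Hypothetical docker-compose.yml sketch for the API VM (not the real file).
services:
  api:
    build: ./api                  # the Go API
    depends_on:
      - strapi
  strapi:
    image: strapi/strapi          # image name is a guess
    volumes:
      - ./strapi-data:/srv/app    # must be owned by UID/GID 1000 (see note below)
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
```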

NOTE: before running docker compose on the API VM, create the strapi-data directory and hand it to UID/GID 1000 (the UID the Strapi container uses):

mkdir -p strapi-data
sudo chown -R 1000:1000 ./strapi-data

2.4. Notes

  1. The API connects to Strapi at startup, so if you see a Strapi connection error, run docker compose restart api and it should work.
  2. The API VM has access, via a local network, to the old MSSQL database for migrating users and donations data. Check this script.
  3. We have a background job in Strapi that migrates donations data from the api DB to the strapi DB. However, the initial migration moved over 3 million rows from the API DB to the Strapi DB; to do that faster, check this file. Note that this has already been done.
  4. Metabase is not deployed yet. We can deploy it on the backup VM since that VM is underutilized.