Configuration

Carrot is configured through environment variables, which may differ depending on the deployment approach.
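For example, with Docker Compose (used throughout this page) the variables can be set inline per service or loaded from an env file. A minimal sketch; the `.env.local` file name and its contents are illustrative assumptions, not part of Carrot's stack:

```yaml
api:
  image: carrot-backend
  # Inline values take precedence over values loaded from env_file.
  environment:
    - DEBUG=True
  # Hypothetical file holding the remaining variables (SECRET_KEY=..., DB_HOST=..., etc.)
  env_file:
    - .env.local
```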

Django Backend

Configuration Section

Keys marked with `*` are required.

| Key | Description |
| --- | --- |
| `FRONTEND_URL`* | The URL of the Frontend service that the Backend should connect to. It only needs to include the scheme (e.g. `http://`), the host (e.g. `my-frontend.com`, or `frontend` in this example to reach the service in the same Compose stack), and optionally a port, e.g. `3000`. |
| `ALLOWED_HOSTS`* | A list of strings representing the host/domain names that this Django site can serve. |
| `DB_ENGINE`* | The database backend to use. Carrot uses the PostgreSQL backend. |
| `DB_HOST` `DB_PORT` `DB_NAME` `DB_USER` `DB_PASSWORD`* | The host, port, name, user, and password required for the PostgreSQL database connection. |
| `DEBUG` | A boolean that turns debug mode on or off. |
| `STORAGE_CONN_STRING`* | The connection string the Backend uses to connect to local storage. |
| `SECRET_KEY`* | A secret key for a particular Django installation, used to provide cryptographic signing. Set it to a unique, unpredictable value; use a real secret in production. |
| `SIGNING_KEY`* | A key required by the JWT token generation process for NextAuth. Use a real secret in production. |
| `SUPERUSER_DEFAULT_USERNAME`* `SUPERUSER_DEFAULT_PASSWORD`* `SUPERUSER_DEFAULT_EMAIL` | Credentials required to create the first superuser in Carrot. Without these variables, no superuser is created. `SUPERUSER_DEFAULT_EMAIL` defaults to `user@carrot`. |
| `STORAGE_TYPE`* | The type of storage to use; the options are `azure` and `minio`. This variable tells Carrot which storage backend to target. Use `azure` for Azure Storage and deployments in Azure. |
| `AIRFLOW_BASE_URL`* `AIRFLOW_AUTO_MAPPING_DAG_ID`* `AIRFLOW_SCAN_REPORT_PROCESSING_DAG_ID`* `AIRFLOW_RULES_EXPORT_DAG_ID`* | These variables tell Carrot which Airflow service to use: the base URL and the IDs of the DAGs to trigger. |
| `AIRFLOW_ADMIN_USERNAME`* `AIRFLOW_ADMIN_PASSWORD`* | Credentials required to access the Airflow webserver and Airflow API. |
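The Compose example below does not include the Airflow-related variables. A minimal sketch of how they might be set on the `api` service; the service name, DAG IDs, and credential values here are illustrative assumptions, not Carrot's actual values:

```yaml
api:
  environment:
    # Hypothetical Airflow webserver service name and port
    - AIRFLOW_BASE_URL=http://airflow-webserver:8080
    # Hypothetical DAG IDs -- match them to the DAGs deployed in your Airflow instance
    - AIRFLOW_AUTO_MAPPING_DAG_ID=auto_mapping
    - AIRFLOW_SCAN_REPORT_PROCESSING_DAG_ID=scan_report_processing
    - AIRFLOW_RULES_EXPORT_DAG_ID=rules_export
    # Must match the credentials configured on the Airflow side
    - AIRFLOW_ADMIN_USERNAME=airflow
    - AIRFLOW_ADMIN_PASSWORD=airflow
```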

Examples

Below is an example configuration for the Carrot Backend (the `api` service), one part of the Compose stack used for local development.

Note: the `STORAGE_TYPE` environment variable defaults to `azure`.

```yaml
api:
  image: carrot-backend
  build:
    context: app
    dockerfile: api/Dockerfile
  ports:
    - 8000:8000
  environment:
    - FRONTEND_URL=http://frontend:3000
    - ALLOWED_HOSTS=['localhost', '127.0.0.1', 'api', 'workers']
    - DB_ENGINE=django.db.backends.postgresql
    - DB_HOST=db
    - DB_PORT=5432
    - DB_NAME=postgres
    - DB_USER=postgres
    - DB_PASSWORD=postgres
    - DEBUG=True
    - SECRET_KEY=secret
    - STORAGE_CONN_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1;
    - SIGNING_KEY=secret
    - SUPERUSER_DEFAULT_USERNAME=admin-local
    - SUPERUSER_DEFAULT_PASSWORD=admin-password
    - SUPERUSER_DEFAULT_EMAIL=admin@carrot
    - STORAGE_TYPE=${STORAGE_TYPE:-azure}
    # MinIO configuration (used automatically if STORAGE_TYPE is set to minio)
    - MINIO_ENDPOINT=minio:9000
    - MINIO_ACCESS_KEY=minioadmin
    - MINIO_SECRET_KEY=minioadmin
  volumes:
    - ./app/api:/api
  depends_on:
    omop-lite:
      condition: service_completed_successfully
```

This service is built from the Dockerfile inside the `app/api/` folder and exposed on port `8000`. It starts only after the `omop-lite` service has run and completed its jobs successfully. The code in the `api` folder is mounted into the container, so changes are reflected without restarting the stack.

Additionally, based on the `STORAGE_TYPE` environment variable, Carrot creates the necessary resources (queue and container for Azure, or buckets for MinIO) automatically.
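To switch a local stack to MinIO without editing the main Compose file, the default can be overridden, for example in a Compose override file (the file name and the choice to override only `api` are assumptions about your setup):

```yaml
# docker-compose.override.yml -- hypothetical override for testing MinIO locally
services:
  api:
    environment:
      # Replaces the ${STORAGE_TYPE:-azure} default, so Carrot creates buckets
      # instead of an Azure queue and container.
      - STORAGE_TYPE=minio
```

Alternatively, because the example above reads `${STORAGE_TYPE:-azure}`, exporting `STORAGE_TYPE=minio` in the shell (or a top-level `.env` file) achieves the same result without an override file.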

Airflow Webserver and Scheduler

Configuration Section

| Key | Description |
| --- | --- |
| `AIRFLOW__CORE__EXECUTOR`* | The executor to use. Defaults to `LocalExecutor`. |
| `AIRFLOW__DATABASE__SQL_ALCHEMY_CONN`* | The connection string to the database service. Without this, Airflow cannot connect to the database. |
| `AIRFLOW__DATABASE__SQL_ALCHEMY_SCHEMA` | The schema Airflow should use in the PostgreSQL database. |
| `AIRFLOW__CORE__LOAD_EXAMPLES` | The flag that loads the Airflow example DAGs. Defaults to `false`. |
| `AIRFLOW__WEBSERVER__WEB_SERVER_PORT`* | The port for the Airflow webserver. Defaults to `8080`. |
| `AIRFLOW__WEBSERVER__SECRET_KEY`* | The secret key for the Airflow webserver. |
| `AIRFLOW__API__AUTH_BACKENDS`* | The authentication backends for the Airflow API. Carrot connects to the Airflow API to trigger DAGs using basic auth. |
| `AIRFLOW_CONN_POSTGRES_DB_CONN`* | The connection string to the PostgreSQL database. |
| `STORAGE_TYPE`* | The type of storage to use; the options are `azure` and `minio`. Defaults to `minio` for Airflow in local development. |
| `AIRFLOW_VAR_MINIO_ENDPOINT`* `AIRFLOW_VAR_MINIO_ACCESS_KEY`* `AIRFLOW_VAR_MINIO_SECRET_KEY`* | These variables tell Airflow which MinIO service to use and the credentials to access it. |
| `AIRFLOW_ADMIN_USERNAME`* `AIRFLOW_ADMIN_PASSWORD`* | Credentials required to access the Airflow webserver and Airflow API. |
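For reference, a sketch of how these variables might look in the Airflow webserver's Compose environment for local development. The service name, connection strings, and credential values are assumptions that mirror the `db` and `minio` services shown elsewhere on this page, not Carrot's exact configuration:

```yaml
airflow-webserver:
  environment:
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    # Hypothetical SQLAlchemy URI pointing at the db service from the examples below
    - AIRFLOW__DATABASE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:5432/postgres
    - AIRFLOW__DATABASE__SQL_ALCHEMY_SCHEMA=airflow
    - AIRFLOW__CORE__LOAD_EXAMPLES=false
    - AIRFLOW__WEBSERVER__WEB_SERVER_PORT=8080
    - AIRFLOW__WEBSERVER__SECRET_KEY=secret
    # Basic auth lets Carrot trigger DAGs through the Airflow REST API
    - AIRFLOW__API__AUTH_BACKENDS=airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session
    - AIRFLOW_CONN_POSTGRES_DB_CONN=postgres://postgres:postgres@db:5432/postgres
    - STORAGE_TYPE=minio
    # Values mirroring the MinIO settings in the api example above
    - AIRFLOW_VAR_MINIO_ENDPOINT=minio:9000
    - AIRFLOW_VAR_MINIO_ACCESS_KEY=minioadmin
    - AIRFLOW_VAR_MINIO_SECRET_KEY=minioadmin
    - AIRFLOW_ADMIN_USERNAME=airflow
    - AIRFLOW_ADMIN_PASSWORD=airflow
```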

Other services

Azurite

Carrot optionally uses Azurite as blob storage.

If `STORAGE_TYPE` is set to `azure`, Carrot will automatically create the necessary containers for you (see below).

Blob Containers

  • Container for ScanReport blobs, e.g., `scan-reports`
  • Container for Data Dictionary blobs, e.g., `data-dictionaries`
  • Container for Mapping Rules files, e.g., `rules-exports`

Examples

The example below runs a PostgreSQL database for Carrot on port `5432`. It also runs Azurite, a local Azure Storage emulator, for Carrot's workers on ports `10000` (blob), `10001` (queue), and `10002` (table).

The `command` and the `AZURITE_ACCOUNTS` environment variable in this example ensure the workers can connect to `azurite` properly; the account name and key in `AZURITE_ACCOUNTS` must match those in the Backend's `STORAGE_CONN_STRING`.

After `db` is up, the `omop-lite` service runs a DDL script to create an `omop` schema in the database, then loads the vocabularies downloaded from Athena into it.

Once the `omop` schema exists, `omop-lite` exits automatically.

```yaml
db:
  image: postgres:13
  restart: always
  ports:
    - 5432:5432
  environment:
    - POSTGRES_PASSWORD=postgres

omop-lite:
  image: ghcr.io/health-informatics-uon/omop-lite
  volumes:
    - ./vocabs:/vocabs
  depends_on:
    - db
  environment:
    - DB_PASSWORD=postgres
    - DB_NAME=postgres

azurite:
  image: mcr.microsoft.com/azure-storage/azurite
  restart: always
  volumes:
    - ./app/azurite:/azurite
  ports:
    - 10000:10000
    - 10001:10001
    - 10002:10002
  command: azurite --blobHost azurite --queueHost azurite --tableHost azurite --location /data --debug /data/debug.log --loose --skipApiVersionCheck
  hostname: azurite
  environment:
    - AZURITE_ACCOUNTS=devstoreaccount1:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
```

Database

Carrot uses a PostgreSQL database.

The Carrot database requires two data components at the outset:

  • An `omop` schema with loaded vocabularies
  • Seed data describing the OMOP table and field names

For local development, the former can be created by the `omop-lite` package, as in the example above (step 1 in the developer quickstart guide), and the latter is done in step 4 of the developer quickstart guide.

MinIO

Carrot uses MinIO as blob storage by default in local development.

```yaml
minio:
  profiles: ["main"]
  image: minio/minio
  container_name: minio
  restart: always
  command: server /data --console-address ":9001"
  ports:
    - "9000:9000"
    - "9001:9001"
  environment:
    MINIO_ROOT_USER: "minioadmin" # Only applicable for local development
    MINIO_ROOT_PASSWORD: "minioadmin" # Only applicable for local development
    MINIO_BROWSER: "on"
    MINIO_DOMAIN: "minio"
  volumes:
    - minio_data:/data
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 10s
    timeout: 5s
    retries: 5
  # ...
```

By default, Carrot will automatically create the necessary resources, such as buckets and queues:

  • Buckets: `scan-reports`, `data-dictionaries`, `rules-exports`
  • Queues: `rules-local`, `rules-exports-local`, `uploadreports-local`