Configuration
Carrot is configured through environment variables, which may differ depending on the deployment approach.
Django Backend
Configuration Section
Key | Description |
---|---|
`FRONTEND_URL` * | The URL for the Frontend service. |
`ALLOWED_HOSTS` * | A list of strings representing the host/domain names that this Django site can serve. |
`DB_ENGINE` * | The database backend to use. Carrot uses `django.db.backends.postgresql`. |
`DB_HOST` `DB_PORT` `DB_NAME` `DB_USER` `DB_PASSWORD` | These settings configure the connection to the database. |
`DEBUG` | A boolean that turns debug mode on/off. |
`WORKERS_URL` * | The URL for the workers service. |
`WORKERS_UPLOAD_NAME` * `WORKERS_RULES_EXPORT_NAME` * | Names of the queues in Azurite that the workers consume (e.g. `upload-reports-queue`, `rules-exports-queue`). |
`WORKERS_RULES_NAME` | Name of the workers' entry point that orchestrates rules processing (e.g. `RulesOrchestrator`). |
`WORKERS_RULES_KEY` * | The key to authorise requests sent to the workers. |
`STORAGE_CONN_STRING` * | The connection string linking the Backend to Azure local storage. |
`SECRET_KEY` * | A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. The real value is used in production. |
`SIGNING_KEY` * | A key required in the JWT token generation process for Next Auth. The real value is used in production. |
`SUPERUSER_DEFAULT_USERNAME` `SUPERUSER_DEFAULT_PASSWORD` `SUPERUSER_DEFAULT_EMAIL` | Credentials required to create the first superuser in Carrot. Without these variables, no superuser will be created. |
`STORAGE_TYPE` * | The type of storage to use; the options are `azure` and `minio`. |
`WORKER_SERVICE_TYPE` * | The type of worker service to use (Azure Functions or Airflow). |
`AIRFLOW_BASE_URL` * `AIRFLOW_AUTO_MAPPING_DAG_ID` * `AIRFLOW_SCAN_REPORT_PROCESSING_DAG_ID` * `AIRFLOW_RULES_EXPORT_DAG_ID` * | These variables tell Carrot which Airflow service to use (base URL and DAG IDs). |
`AIRFLOW_ADMIN_USERNAME` * `AIRFLOW_ADMIN_PASSWORD` * | Credentials required to access the Airflow webserver and Airflow API. |
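A Django settings module typically reads these variables from the environment at startup. The sketch below shows one plausible way to do so; the parsing details (and the default values) are assumptions for illustration, not Carrot's actual settings code:

```python
import ast
import os

# Illustrative defaults only -- in a real deployment these come from the
# compose file or the hosting environment.
os.environ.setdefault("FRONTEND_URL", "http://frontend:3000")
os.environ.setdefault("ALLOWED_HOSTS", "['localhost', '127.0.0.1']")
os.environ.setdefault("DEBUG", "False")

FRONTEND_URL = os.environ["FRONTEND_URL"]

# ALLOWED_HOSTS is passed as a Python-style list literal, so it has to be
# parsed back into a list of strings.
ALLOWED_HOSTS = ast.literal_eval(os.environ["ALLOWED_HOSTS"])

# DEBUG arrives as a string ("True"/"False"); compare it, don't bool() it,
# since bool("False") is True.
DEBUG = os.environ["DEBUG"].lower() in ("true", "1", "yes")

print(ALLOWED_HOSTS)  # e.g. ['localhost', '127.0.0.1']
print(DEBUG)
```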
Examples
Below is an example configuration for the Carrot Backend (`api`) service, which is one part of the Compose stack used for local development. Note: the `STORAGE_TYPE` environment variable defaults to `azure`.
```yaml
api:
  image: carrot-backend
  build:
    context: app
    dockerfile: api/Dockerfile
  ports:
    - 8000:8000
  environment:
    - FRONTEND_URL=http://frontend:3000
    - ALLOWED_HOSTS=['localhost', '127.0.0.1','api', 'workers']
    - DB_ENGINE=django.db.backends.postgresql
    - DB_HOST=db
    - DB_PORT=5432
    - DB_NAME=postgres
    - DB_USER=postgres
    - DB_PASSWORD=postgres
    - DEBUG=True
    - WORKERS_UPLOAD_NAME=upload-reports-queue
    - SECRET_KEY=secret
    - WORKERS_URL=http://workers:80
    - WORKERS_RULES_NAME=RulesOrchestrator
    - WORKERS_RULES_KEY=rules_key
    - WORKERS_RULES_EXPORT_NAME=rules-exports-queue
    - STORAGE_CONN_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1;
    - SIGNING_KEY=secret
    - SUPERUSER_DEFAULT_USERNAME=admin-local
    - SUPERUSER_DEFAULT_PASSWORD=admin-password
    - SUPERUSER_DEFAULT_EMAIL=admin@carrot
    - STORAGE_TYPE=${STORAGE_TYPE:-azure}
    # MinIO configuration (used automatically if STORAGE_TYPE is set to minio)
    - MINIO_ENDPOINT=minio:9000
    - MINIO_ACCESS_KEY=minioadmin
    - MINIO_SECRET_KEY=minioadmin
  volumes:
    - ./app/api:/api
  depends_on:
    omop-lite:
      condition: service_completed_successfully
```
This service is built from the Dockerfile inside the `app/api/` folder and is exposed on port `8000:8000`. It starts after the `omop-lite` service has run and completed its jobs successfully. While running, it uses the code mounted from the `api` folder, so changes are reflected without restarting the stack. Additionally, the `STORAGE_TYPE` environment variable makes Carrot create the necessary resources (Queue & Container, or Buckets) automatically.
Airflow Webserver and Scheduler
Configuration Section
Key | Description |
---|---|
`AIRFLOW__CORE__EXECUTOR` * | The executor to use. Defaults to `LocalExecutor`. |
`AIRFLOW__DATABASE__SQL_ALCHEMY_CONN` * | The connection string to the database service. Without this, Airflow will not be able to connect to the database. |
`AIRFLOW__DATABASE__SQL_ALCHEMY_SCHEMA` | The Airflow schema to use for the PostgreSQL database. |
`AIRFLOW__CORE__LOAD_EXAMPLES` | The flag to load the Airflow examples. Defaults to `false`. |
`AIRFLOW__WEBSERVER__WEB_SERVER_PORT` * | The port to use for the Airflow webserver. Defaults to `8080`. |
`AIRFLOW__WEBSERVER__SECRET_KEY` * | The secret key to use for the Airflow webserver. |
`AIRFLOW__API__AUTH_BACKENDS` * | The authentication backends to use for the Airflow API. Carrot connects to the Airflow API to trigger DAGs through basic auth. |
`AIRFLOW_CONN_POSTGRES_DB_CONN` * | The connection string to the PostgreSQL database. |
`STORAGE_TYPE` * | The type of storage to use; the options are `azure` and `minio`. |
`AIRFLOW_VAR_MINIO_ENDPOINT` * `AIRFLOW_VAR_MINIO_ACCESS_KEY` * `AIRFLOW_VAR_MINIO_SECRET_KEY` * | These variables tell Airflow which MinIO service to use and the credentials to access it. |
`AIRFLOW_ADMIN_USERNAME` * `AIRFLOW_ADMIN_PASSWORD` * | Credentials required to access the Airflow webserver and Airflow API. |
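Triggering a DAG through the Airflow REST API with basic auth looks roughly like the sketch below. The endpoint path follows the standard Airflow 2 REST API (`POST /api/v1/dags/{dag_id}/dagRuns`); the base URL, DAG ID, and credentials are illustrative values, not Carrot's actual configuration:

```python
import base64
import json
import urllib.request

def build_dag_run_request(base_url: str, dag_id: str,
                          username: str, password: str) -> urllib.request.Request:
    """Build a POST request that triggers one run of the given DAG."""
    url = f"{base_url.rstrip('/')}/api/v1/dags/{dag_id}/dagRuns"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps({"conf": {}}).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Illustrative values mirroring AIRFLOW_BASE_URL and the AIRFLOW_*_DAG_ID
# variables in the table above.
req = build_dag_run_request(
    "http://airflow-webserver:8080", "scan_report_processing", "admin", "admin"
)
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires a running Airflow webserver, so it is omitted here.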
Other services
Azure functions
Carrot optionally uses Azure Functions as workers to process tasks.
Configuration Section
Key | Description |
---|---|
`AzureWebJobsSecretStorageType` | Specifies the repository or provider to use for key storage. Carrot's workers use `files`. |
`IsEncrypted` | Whether the values in `local.settings.json` are encrypted using a local machine key. Only required in local development. |
`FUNCTIONS_WORKER_RUNTIME` * | The language or language stack of the worker runtime to load in the function app. |
`DB_ENGINE` * | The database backend to use. Carrot's workers use `django.db.backends.postgresql`. |
`DB_HOST` `DB_PORT` `DB_NAME` `DB_USER` `DB_PASSWORD` | These settings configure the connection to the database. |
`WORKERS_UPLOAD_NAME` `RULES_QUEUE_NAME` `RULES_FILE_QUEUE_NAME` | Names of the queues in Azurite that the workers consume. |
`WEBSITE_HOSTNAME` | The address that can be used to reach the function app from outside. Only required in local development. |
`STORAGE_CONN_STRING` * `AzureWebJobsStorage` * | The keys to connect the Workers to Azure storage. |
`STORAGE_TYPE` * | The type of storage to use; the options are `azure` and `minio`. |
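The `STORAGE_CONN_STRING` and `AzureWebJobsStorage` values are semicolon-separated `key=value` pairs. The small helper below splits one into its parts, which can be handy when debugging a local Azurite setup; it is an illustrative sketch, not part of Carrot:

```python
def parse_conn_string(conn: str) -> dict:
    """Split an Azure storage connection string into key/value pairs."""
    parts = {}
    for segment in conn.strip(";").split(";"):
        # partition on the FIRST '=', because base64 account keys end in '=='
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

conn = (
    "DefaultEndpointsProtocol=http;"
    "AccountName=devstoreaccount1;"
    "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
    "QueueEndpoint=http://azurite:10001/devstoreaccount1"
)
parts = parse_conn_string(conn)
print(parts["AccountName"])   # devstoreaccount1
print(parts["QueueEndpoint"])
```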
Examples
Below is an example configuration for Carrot's Azure Functions (`workers`) service, which is one part of the Compose stack used for local development. Note: the `STORAGE_TYPE` environment variable defaults to `azure`.
```yaml
workers:
  image: carrot-workers
  build:
    context: app
    dockerfile: workers/Dockerfile
  ports:
    - 8080:80
    - 7071:80
  environment:
    - AzureWebJobsSecretStorageType=files
    - IsEncrypted=false
    - AzureWebJobsStorage=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1;
    - FUNCTIONS_WORKER_RUNTIME=python
    - STORAGE_CONN_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1;
    - APP_URL=http://api:8000/
    - WORKERS_UPLOAD_NAME=upload-reports-queue
    - RULES_QUEUE_NAME=rules-queue
    - RULES_FILE_QUEUE_NAME=rules-exports-queue
    - WEBSITE_HOSTNAME=localhost:7071
    - DB_ENGINE=django.db.backends.postgresql
    - DB_HOST=db
    - DB_PORT=5432
    - DB_NAME=postgres
    - DB_USER=postgres
    - DB_PASSWORD=postgres
    - STORAGE_TYPE=${STORAGE_TYPE:-azure}
    # MinIO configuration (used automatically if STORAGE_TYPE is set to minio)
    - MINIO_ENDPOINT=minio:9000
    - MINIO_ACCESS_KEY=minioadmin
    - MINIO_SECRET_KEY=minioadmin
  volumes:
    - ./app/workers:/home/site/wwwroot
    - ./app/shared:/shared
    - ./app/workers/Secrets:/azure-functions-host/Secrets/
  depends_on:
    - api
    - azurite
```
This service is built from the Dockerfile inside the `app/workers/` folder and is exposed on ports `8080:80` and `7071:80`. It starts after the `api` and `azurite` services are successfully up. While running, it also mounts several volumes that support its functionality and authentication.
Azurite
Carrot optionally uses Azurite as queue and blob storage for Azure Functions.
If `STORAGE_TYPE` is set to `azure`, Carrot will automatically create the necessary resources, such as Queues & Containers, for you (see below).

- Queue for rules action triggers, e.g., `rules-queue`
- Queue for mapping rules exports, e.g., `rules-exports-queue`
- Queue for scan report uploads, e.g., `upload-reports-queue`
Examples
The example below runs a PostgreSQL database for Carrot on port `5432:5432`. Additionally, it runs Azure local storage for Carrot's workers on ports `10000:10000`, `10001:10001`, and `10002:10002`.
The `command` and the `AZURITE_ACCOUNTS` environment variable in this example ensure that the connection between `azurite` and `workers` works properly.
After `db` is up, the `omop-lite` service runs a DDL script to create an `omop` schema in the `db`, then loads the vocabularies downloaded from Athena into it. Once the `omop` schema exists, `omop-lite` closes automatically.
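The `AZURITE_ACCOUNTS` variable holds `account:key` pairs separated by semicolons (Azurite also accepts a second key per account, as `account:key1:key2`). The helper below shows how such a value breaks down; it is an illustrative sketch, not Carrot code:

```python
def parse_azurite_accounts(value: str) -> dict:
    """Map each Azurite account name to its list of keys."""
    accounts = {}
    for entry in value.split(";"):
        if not entry:
            continue  # tolerate a trailing semicolon
        name, *keys = entry.split(":")
        accounts[name] = keys
    return accounts

accounts = parse_azurite_accounts(
    "devstoreaccount1:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
)
print(list(accounts))  # ['devstoreaccount1']
```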
```yaml
db:
  image: postgres:13
  restart: always
  ports:
    - 5432:5432
  environment:
    - POSTGRES_PASSWORD=postgres
omop-lite:
  image: ghcr.io/andyrae/omop-lite
  volumes:
    - ./vocabs:/vocabs
  depends_on:
    - db
  environment:
    - DB_PASSWORD=postgres
    - DB_NAME=postgres
azurite:
  image: mcr.microsoft.com/azure-storage/azurite
  restart: always
  volumes:
    - ./app/azurite:/azurite
  ports:
    - 10000:10000
    - 10001:10001
    - 10002:10002
  command: azurite --blobHost azurite --queueHost azurite --tableHost azurite --location /data --debug /data/debug.log --loose --skipApiVersionCheck
  hostname: azurite
  environment:
    - AZURITE_ACCOUNTS=devstoreaccount1:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
```
Database
Carrot uses a PostgreSQL database.
The Carrot database requires two data components at the beginning:

1. An `omop` schema with loaded vocabularies
2. Seeding data about OMOP table and field names

For local development, the former can be created by the `omop-lite` package as in the example above (step 1 in the developer quickstart guide), and the latter can be done in step 4 of the developer quickstart guide.
MinIO
Carrot uses MinIO as blob storage by default in local development.
```yaml
minio:
  profiles: ["main"]
  image: minio/minio
  container_name: minio
  restart: always
  command: server /data --console-address ":9001"
  ports:
    - "9000:9000"
    - "9001:9001"
  environment:
    MINIO_ROOT_USER: "minioadmin" # Only applicable for local development
    MINIO_ROOT_PASSWORD: "minioadmin" # Only applicable for local development
    MINIO_BROWSER: "on"
    MINIO_DOMAIN: "minio"
  volumes:
    - minio_data:/data
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 10s
    timeout: 5s
    retries: 5
  # ...
```
By default, Carrot will automatically create the necessary resources, such as Buckets and Blob Queues:

- `BUCKETS` = [`scan-reports`, `data-dictionaries`, `rules-exports`]
- `QUEUES` = [`rules-local`, `rules-exports-local`, `uploadreports-local`]
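Creating these resources automatically usually comes down to an idempotent "create if missing" loop at startup. The sketch below shows the pattern against a generic storage client; the client interface (`bucket_exists`, `make_bucket`, etc.) is hypothetical, and Carrot's actual implementation uses the Azure/MinIO SDKs:

```python
BUCKETS = ["scan-reports", "data-dictionaries", "rules-exports"]
QUEUES = ["rules-local", "rules-exports-local", "uploadreports-local"]

def ensure_resources(client, buckets=BUCKETS, queues=QUEUES) -> list:
    """Create any missing buckets/queues; return the names that were created."""
    created = []
    for name in buckets:
        if not client.bucket_exists(name):  # hypothetical client API
            client.make_bucket(name)
            created.append(name)
    for name in queues:
        if not client.queue_exists(name):
            client.make_queue(name)
            created.append(name)
    return created

class FakeClient:
    """In-memory stand-in for a storage client, for demonstration only."""
    def __init__(self):
        self.buckets, self.queues = set(), set()
    def bucket_exists(self, n): return n in self.buckets
    def make_bucket(self, n): self.buckets.add(n)
    def queue_exists(self, n): return n in self.queues
    def make_queue(self, n): self.queues.add(n)

client = FakeClient()
first = ensure_resources(client)   # creates all six resources
second = ensure_resources(client)  # nothing left to create
print(len(first), len(second))     # 6 0
```

Because the loop only creates what is missing, it is safe to run on every startup of the stack.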