# Deploying on GCP
Manual deployment is not recommended, and this guide is not actively maintained. See the DSS CI/CD guide for the most up-to-date deployment steps.
The Econia DSS is portable infrastructure that can be run locally or on cloud compute.
This guide will show you how to run the DSS on Google Cloud Platform (GCP), assuming you have admin privileges.
See the `gcloud` CLI reference for more information on the commands used in this walkthrough.
## Initial setup
Follow the steps in this section in order, making sure to keep the relevant shell variables stored in your active shell session.
Use a scratchpad text file to store shell variable assignment statements that you can copy-paste into your shell:
```sh
ORGANIZATION_ID=123456789012
BILLING_ACCOUNT_ID=ABCDEF-GHIJKL-MNOPQR
REGION=a-region
ZONE=a-zone
```
### Configure project
Create a GCP organization, try GCP for free, or otherwise get access to GCP.
List the organizations that you are a member of:
```sh
gcloud organizations list
```
Store your preferred organization ID in a shell variable:
```sh
ORGANIZATION_ID=<YOUR_ORGANIZATION_ID>
echo $ORGANIZATION_ID
```
Choose a project ID (like `fast-15`) that complies with the GCP project ID rules and store it in a shell variable:
```sh
PROJECT_ID=<YOUR_PROJECT_ID>
echo $PROJECT_ID
```
Create a new project with the name `econia-dss`:
```sh
gcloud projects create $PROJECT_ID \
--name econia-dss \
--organization $ORGANIZATION_ID
```
List your billing account ID:
```sh
gcloud alpha billing accounts list
```
:::tip
As of this writing, some billing commands were still in alpha release. If you prefer a stable command release, you might not need the `alpha` keyword.
:::
Store the billing account ID in a shell variable:
```sh
BILLING_ACCOUNT_ID=<YOUR_BILLING_ACCOUNT_ID>
echo $BILLING_ACCOUNT_ID
```
Link the billing account to the project:
```sh
gcloud alpha billing projects link $PROJECT_ID \
--billing-account $BILLING_ACCOUNT_ID
```
Set the project as default:
```sh
gcloud config set project $PROJECT_ID
```
### Grant project permissions
Download the project IAM policy:
```sh
gcloud projects get-iam-policy $PROJECT_ID > policy.yaml
```
In `policy.yaml`, add the email address of a user in your Google Workspace to the `members` list of the `roles/owner` binding.
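The edited binding should look something like the following sketch (`someone@example.com` is a placeholder; leave the file's other bindings and its `etag` untouched):
```yaml
bindings:
  - members:
      - user:someone@example.com
    role: roles/owner
```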
Set the IAM policy:
```sh
gcloud projects set-iam-policy $PROJECT_ID policy.yaml
```
Instruct the user to install the Google Cloud CLI and set the project ID as default before continuing:
```sh
PROJECT_ID=<PROJECT_ID>
echo $PROJECT_ID
gcloud config set project $PROJECT_ID
```
### Configure locations
List available build regions:
```sh
gcloud artifacts locations list
```
Pick a preferred region and store it in a shell variable:
```sh
REGION=<PREFERRED_REGION>
```
List available deployment zones:
```sh
gcloud compute zones list
```
Pick a preferred zone and store it in a shell variable:
```sh
ZONE=<PREFERRED_ZONE>
```
Store values as defaults:
```sh
echo $REGION
echo $ZONE
gcloud config set artifacts/location $REGION
gcloud config set compute/zone $ZONE
gcloud config set run/region $REGION
```
### Build images
Create a GCP Artifact Registry Docker repository named `images`:
```sh
gcloud artifacts repositories create images \
--repository-format docker
```
Set the repository as default:
```sh
gcloud config set artifacts/repository images
```
Clone the Econia repository:
```sh
git clone https://github.com/econia-labs/econia.git
```
Build the DSS images from source:
```sh
gcloud builds submit econia \
--config econia/src/docker/gcp-tutorial-config.yaml \
--substitutions _REGION=$REGION
```
:::tip
This will take a while, since it involves the compilation of several binaries from source.
:::
### Create bootstrapper
Create a GCP Compute Engine instance for bootstrapping config files, with two attached persistent disks:
```sh
gcloud compute instances create bootstrapper \
--create-disk "$(printf '%s' \
auto-delete=no,\
name=postgres-disk,\
size=100GB\
)" \
--create-disk "$(printf '%s' \
auto-delete=no,\
name=processor-disk,\
size=1GB\
)"
```
Create an SSH key pair and use it to upload PostgreSQL configuration files to the bootstrapper:
```sh
mkdir ssh
ssh-keygen -t rsa -f ssh/gcp -C bootstrapper -b 2048 -q -N ""
gcloud compute scp \
econia/src/docker/database/configs/pg_hba.conf \
econia/src/docker/database/configs/postgresql.conf \
bootstrapper:~ \
--ssh-key-file ssh/gcp
```
Connect to the bootstrapper instance:
```sh
gcloud compute ssh bootstrapper --ssh-key-file ssh/gcp
```
List the attached disks by device name:
```sh
sudo lsblk
```
:::tip
The device name for the `postgres` disk will probably be `sdb`, and the device name for the `processor` disk will probably be `sdc` (check the disk sizes if you are unsure).
:::
Store the device names in shell variables:
```sh
POSTGRES_DISK_DEVICE_NAME=<PROBABLY_sdb>
PROCESSOR_DISK_DEVICE_NAME=<PROBABLY_sdc>
echo "PostgreSQL disk device name: $POSTGRES_DISK_DEVICE_NAME"
echo "Processor disk device name: $PROCESSOR_DISK_DEVICE_NAME"
```
Format and mount the disks with read/write permissions:
```sh
sudo mkfs.ext4 \
-m 0 \
-E lazy_itable_init=0,lazy_journal_init=0,discard \
/dev/$POSTGRES_DISK_DEVICE_NAME
sudo mkfs.ext4 \
-m 0 \
-E lazy_itable_init=0,lazy_journal_init=0,discard \
/dev/$PROCESSOR_DISK_DEVICE_NAME
sudo mkdir -p /mnt/disks/postgres
sudo mkdir -p /mnt/disks/processor
sudo mount -o \
discard,defaults \
/dev/$POSTGRES_DISK_DEVICE_NAME \
/mnt/disks/postgres
sudo mount -o \
discard,defaults \
/dev/$PROCESSOR_DISK_DEVICE_NAME \
/mnt/disks/processor
sudo chmod a+w /mnt/disks/postgres
sudo chmod a+w /mnt/disks/processor
```
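Optionally, confirm that both mounts took effect:
```sh
# Each mount point should be listed with its disk's size.
df -h /mnt/disks/postgres /mnt/disks/processor
```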
Create a PostgreSQL data directory and move the config files into it:
```sh
mkdir /mnt/disks/postgres/data
mv pg_hba.conf /mnt/disks/postgres/data/pg_hba.conf
mv postgresql.conf /mnt/disks/postgres/data/postgresql.conf
```
End the connection with the bootstrapper:
```sh
exit
```
Detach `postgres-disk` from the bootstrapper:
```sh
gcloud compute instances detach-disk bootstrapper --disk postgres-disk
```
### Deploy database
Create an administrator username and password and store them in shell variables:
```sh
ADMIN_NAME=<YOUR_ADMIN_NAME>
ADMIN_PASSWORD=<YOUR_ADMIN_PW>
echo "Admin name: $ADMIN_NAME"
echo "Admin password: $ADMIN_PASSWORD"
```
Deploy the `postgres` image as a Compute Engine container with the `postgres` disk as a data volume:
```sh
gcloud compute instances create-with-container postgres \
--container-env "$(printf '%s' \
POSTGRES_USER=$ADMIN_NAME,\
POSTGRES_PASSWORD=$ADMIN_PASSWORD\
)" \
--container-image \
$REGION-docker.pkg.dev/$PROJECT_ID/images/postgres \
--container-mount-disk "$(printf '%s' \
mount-path=/var/lib/postgresql,\
name=postgres-disk\
)" \
--disk "$(printf '%s' \
auto-delete=no,\
device-name=postgres-disk,\
name=postgres-disk\
)"Store the instance's internal and external IP addresses as well your public IP address in shell variables:
POSTGRES_EXTERNAL_IP=$(gcloud compute instances list \
--filter name=postgres \
--format "value(networkInterfaces[0].accessConfigs[0].natIP)" \
)
POSTGRES_INTERNAL_IP=$(gcloud compute instances list \
--filter name=postgres \
--format "value(networkInterfaces[0].networkIP)" \
)
MY_IP=$(curl --silent http://checkip.amazonaws.com)
echo "\n\nPostgreSQL external IP: $POSTGRES_EXTERNAL_IP"
echo "PostgreSQL internal IP: $POSTGRES_INTERNAL_IP"
echo "Your IP: $MY_IP"Promote the instance's external and internal addresses from ephemeral to static:
gcloud compute addresses create postgres-external \
--addresses $POSTGRES_EXTERNAL_IP \
--region $REGION
gcloud compute addresses create postgres-internal \
--addresses $POSTGRES_INTERNAL_IP \
--region $REGION \
--subnet default
```
Allow incoming traffic on port 5432 from your IP address:
```sh
gcloud compute firewall-rules create pg-admin \
--allow tcp:5432 \
--direction INGRESS \
--source-ranges $MY_IP
```
Store the PostgreSQL public connection string as an environment variable:
```sh
export DATABASE_URL="$(printf '%s' postgres://\
$ADMIN_NAME:\
$ADMIN_PASSWORD@\
$POSTGRES_EXTERNAL_IP:5432/econia
)"
echo $DATABASE_URL
```
Install `diesel` if you don't already have it, then check that the database has an empty schema:
```sh
diesel print-schema
```
:::tip
You might not be able to connect to the database until a minute or so after you've first created the instance.
:::
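To check basic connectivity independently of `diesel`, a quick sketch (assuming the `psql` client is installed locally):
```sh
# Should print the server version once the instance accepts connections.
psql "$DATABASE_URL" --command "SELECT version();"
```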
Run the database migrations, then check the schema again:
```sh
cd econia/src/rust/dbv2
diesel migration run
diesel print-schema
cd ../../../..
```
### Deploy REST API
Create a connector for your project's `default` Virtual Private Cloud (VPC) network:
```sh
gcloud compute networks vpc-access connectors create \
postgrest \
--range 10.8.0.0/28 \
--region $REGION
```
Verify that the connector is ready:
```sh
STATE=$(gcloud compute networks vpc-access connectors describe \
postgrest \
--region $REGION \
--format "value(state)"
)
echo "Connector state is: $STATE"
```
Construct the PostgREST connection URL to connect to the `postgres` instance:
```sh
DB_URL_PRIVATE="$(printf '%s' postgres://\
$ADMIN_NAME:\
$ADMIN_PASSWORD@\
$POSTGRES_INTERNAL_IP:5432/econia
)"
echo $DB_URL_PRIVATE
```
Determine a max number of rows per PostgREST query:
```sh
PGRST_DB_MAX_ROWS=<MAX_ROWS_FOR_FETCH>
echo $PGRST_DB_MAX_ROWS
```
Deploy PostgREST on GCP Cloud Run with public access:
```sh
gcloud run deploy postgrest \
--allow-unauthenticated \
--image \
$REGION-docker.pkg.dev/$PROJECT_ID/images/postgrest \
--port 3000 \
--set-env-vars "$(printf '%s' \
PGRST_DB_ANON_ROLE=web_anon,\
PGRST_DB_SCHEMA=api,\
PGRST_DB_URI=$DB_URL_PRIVATE,\
PGRST_DB_MAX_ROWS=$PGRST_DB_MAX_ROWS\
)" \
--vpc-connector postgrest
```
Store the service URL in a shell variable:
```sh
export REST_URL=$(
gcloud run services describe postgrest \
--format "value(status.url)"
)
echo $REST_URL
```
Verify that you can query the PostgREST API from the public URL:
```sh
curl $REST_URL
```
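You can also query a specific table endpoint; for example, `limit_orders` (used again later in this guide) should return an empty JSON array until the processor and aggregator have populated the database:
```sh
# Expect [] on a fresh deployment with no indexed events.
curl "$REST_URL/limit_orders?limit=1"
```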
### Deploy processor
Create a config at `econia/src/docker/processor/config.yaml` per the general DSS guidelines.
:::tip
For `postgres_connection_string`, use the same one that the `postgrest` service uses:
```sh
echo $DB_URL_PRIVATE
```
:::
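For orientation, here is a partial sketch of what the file might contain (only fields mentioned in this guide, with placeholder values; the general DSS guide is the authoritative schema):
```yaml
# Partial sketch only; see the general DSS guide for all required fields.
postgres_connection_string: postgres://<ADMIN_NAME>:<ADMIN_PW>@<POSTGRES_INTERNAL_IP>:5432/econia
econia_address: <ECONIA_ADDRESS>
starting_version: <STARTING_VERSION>
```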
Upload the processor config to the bootstrapper:
```sh
gcloud compute scp \
econia/src/docker/processor/config.yaml \
bootstrapper:~ \
--ssh-key-file ssh/gcp
```
Connect to the bootstrapper:
```sh
gcloud compute ssh bootstrapper --ssh-key-file ssh/gcp
```
Create a processor data directory and move the config file into it:
```sh
mkdir /mnt/disks/processor/data
mv config.yaml /mnt/disks/processor/data/config.yaml
```
End the connection with the bootstrapper:
```sh
exit
```
Stop the bootstrapper:
```sh
gcloud compute instances stop bootstrapper
```
Detach `processor-disk` from the bootstrapper:
```sh
gcloud compute instances detach-disk bootstrapper --disk processor-disk
```
Deploy the `processor` image:
```sh
gcloud compute instances create-with-container processor \
--container-env HEALTHCHECK_BEFORE_START=false \
--container-image \
$REGION-docker.pkg.dev/$PROJECT_ID/images/processor \
--container-mount-disk "$(printf '%s' \
mount-path=/config,\
name=processor-disk\
)" \
--disk "$(printf '%s' \
auto-delete=no,\
device-name=processor-disk,\
name=processor-disk\
)"Give the processor a minute or so to start up, then view the container logs:
PROCESSOR_ID=$(gcloud compute instances describe processor \
--zone $ZONE \
--format="value(id)"
)
gcloud logging read "resource.type=gce_instance AND \
logName=projects/$PROJECT_ID/logs/cos_containers AND \
resource.labels.instance_id=$PROCESSOR_ID" \
--limit 5
```
Once the processor has had enough time to sync, check some of the events from one of the REST endpoints:
```sh
curl $REST_URL/<AN_ENDPOINT>
```
:::tip
For immediate results (but with missed events and a corrupted database) during testing, use a testnet config with the following:
```yaml
econia_address: 0xc0de11113b427d35ece1d8991865a941c0578b0f349acabbe9753863c24109ff
starting_version: 683453241
```
Then try `curl $REST_URL/balance_updates`, since this starting version immediately precedes a series of balance update operations on testnet.
:::
### Deploy aggregator
Deploy an `aggregator` instance using the private connection string:
```sh
echo $DB_URL_PRIVATE
gcloud compute instances create-with-container aggregator \
--container-env DATABASE_URL=$DB_URL_PRIVATE \
--container-image \
$REGION-docker.pkg.dev/$PROJECT_ID/images/aggregator
```
Wait a minute or two, then check the logs:
```sh
AGGREGATOR_ID=$(gcloud compute instances describe aggregator \
--zone $ZONE \
--format="value(id)"
)
gcloud logging read "resource.type=gce_instance AND \
logName=projects/$PROJECT_ID/logs/cos_containers AND \
resource.labels.instance_id=$AGGREGATOR_ID" \
--limit 5
```
Once the aggregator has had enough time to aggregate events, check some aggregated data. For example, on testnet:
```sh
echo $REST_URL
curl "$(printf '%s' \
"$REST_URL/"\
"limit_orders?"\
"order=price.desc,"\
"last_increase_stamp.asc&"\
"market_id=eq.3&"\
"side=eq.ask&"\
"order_status=eq.closed&"\
"limit=3"\
)"
### Deploy WebSockets API
Create a connector:
```sh
gcloud compute networks vpc-access connectors create \
websockets \
--range 10.64.0.0/28 \
--region $REGION
```
Verify connector readiness:
```sh
STATE=$(gcloud compute networks vpc-access connectors describe \
websockets \
--region $REGION \
--format "value(state)"
)
echo "Connector state is: $STATE"Construct WebSockets connection string:
PGWS_DB_URI="$(printf '%s' postgres://\
$ADMIN_NAME:\
$ADMIN_PASSWORD@\
$POSTGRES_INTERNAL_IP/econia
)"
echo $PGWS_DB_URI
```
Deploy the `websockets` service:
```sh
gcloud run deploy websockets \
--allow-unauthenticated \
--image \
$REGION-docker.pkg.dev/$PROJECT_ID/images/websockets \
--port 3000 \
--set-env-vars "$(printf '%s' \
PGWS_DB_URI=$PGWS_DB_URI,\
PGWS_JWT_SECRET=econia_0000000000000000000000000,\
PGWS_CHECK_LISTENER_INTERVAL=1000,\
PGWS_LISTEN_CHANNEL=econiaws\
)" \
--vpc-connector websockets
```
Store the service URL:
```sh
WS_HTTPS_URL=$(
gcloud run services describe websockets \
--format "value(status.url)"
)
export WS_URL=$(echo $WS_HTTPS_URL | sed 's/https/wss/')
echo $WS_URL
```
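Store the listen channel for the monitoring script below (an assumption that it should match the `PGWS_LISTEN_CHANNEL` value set during deployment):
```sh
WS_CHANNEL=econiaws
```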
Monitor events using the WebSockets listening script:
```sh
echo $WS_URL
echo $REST_URL
echo $WS_CHANNEL
cd econia/src/python/sdk
poetry install
poetry run event
# To quit:
<Ctrl+C>
cd ../../../..
```
## Redeployment
Once you have the DSS running, you might want to redeploy within the same GCP project, for example to use a different chain or new image binaries.
Whenever you redeploy, follow the steps below in order so that you do not break startup dependencies or generate corrupted data:
Delete images in the existing `images` registry:
```sh
echo $REGION
echo $PROJECT_ID
gcloud artifacts docker images delete \
$REGION-docker.pkg.dev/$PROJECT_ID/images/aggregator
gcloud artifacts docker images delete \
$REGION-docker.pkg.dev/$PROJECT_ID/images/postgres
gcloud artifacts docker images delete \
$REGION-docker.pkg.dev/$PROJECT_ID/images/postgrest
gcloud artifacts docker images delete \
$REGION-docker.pkg.dev/$PROJECT_ID/images/processor
gcloud artifacts docker images delete \
$REGION-docker.pkg.dev/$PROJECT_ID/images/websockets
gcloud artifacts docker images list
```
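Equivalently, the five deletion commands above can be written as a loop:
```sh
# Same commands as above, one image at a time.
for IMAGE in aggregator postgres postgrest processor websockets; do
gcloud artifacts docker images delete \
$REGION-docker.pkg.dev/$PROJECT_ID/images/$IMAGE
done
```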
:::tip
You only need to delete images that you wish to redeploy newer versions of. For images that you are sure haven't changed, you can comment them out of the build file in the next step.
:::
Rebuild the images in the existing `images` registry.
Delete the `postgrest` and `websockets` services:
```sh
gcloud run services delete postgrest --quiet
gcloud run services delete websockets --quiet
```
:::tip
When these are redeployed, they will have the same endpoint URL as before.
:::
Delete the `aggregator` and `processor` instances:
```sh
gcloud compute instances delete aggregator --quiet
gcloud compute instances delete processor --quiet
```
Clear all container images from `postgres`:
```sh
gcloud compute ssh postgres \
--command "$(printf '%s' \
"docker ps -aq | xargs docker stop | xargs docker rm && "\
"docker image prune -af"\
)" \
--ssh-key-file ssh/gcp \
--verbosity=debug
```
:::tip
You'll need to create new SSH keys if you deleted the ones you were previously using.
:::
:::note
Unlike the `aggregator` and `processor` instances, which are deleted and then recreated, `postgres` has static IP addresses and is instead updated with a new container.
:::
Update the `postgres` container and restart:
```sh
echo $ADMIN_NAME
echo $ADMIN_PASSWORD
gcloud compute instances update-container postgres \
--container-env "$(printf '%s' \
POSTGRES_USER=$ADMIN_NAME,\
POSTGRES_PASSWORD=$ADMIN_PASSWORD\
)" \
--container-image \
$REGION-docker.pkg.dev/$PROJECT_ID/images/postgres \
--container-mount-disk "$(printf '%s' \
mount-path=/var/lib/postgresql,\
name=postgres-disk\
)"Reset database:
POSTGRES_EXTERNAL_IP=$(gcloud compute instances list \
--filter name=postgres \
--format "value(networkInterfaces[0].accessConfigs[0].natIP)" \
)
export DATABASE_URL="$(printf '%s' postgres://\
$ADMIN_NAME:\
$ADMIN_PASSWORD@\
$POSTGRES_EXTERNAL_IP:5432/econia
)"
echo $DATABASE_URL
```
:::tip
Give the instance a minute or so to start up before trying to connect.
:::
```sh
cd econia/src/rust/dbv2
diesel database reset
cd ../../../..
```
Get the private connection string:
```sh
POSTGRES_INTERNAL_IP=$(gcloud compute instances list \
--filter name=postgres \
--format "value(networkInterfaces[0].networkIP)" \
)
DB_URL_PRIVATE="$(printf '%s' postgres://\
$ADMIN_NAME:\
$ADMIN_PASSWORD@\
$POSTGRES_INTERNAL_IP:5432/econia
)"
echo $DB_URL_PRIVATE
```
Update your local processor config at `econia/src/docker/processor/config.yaml`, using `DB_URL_PRIVATE` for `postgres_connection_string`.
Start the bootstrapper:
```sh
gcloud compute instances start bootstrapper
```
Upload the config:
```sh
gcloud compute scp \
econia/src/docker/processor/config.yaml \
bootstrapper:~ \
--ssh-key-file ssh/gcp
```
:::tip
It may take a bit for the bootstrapper to start up.
:::
Attach the config disk to the bootstrapper:
```sh
gcloud compute instances attach-disk bootstrapper --disk processor-disk
```
Connect to the bootstrapper:
```sh
gcloud compute ssh bootstrapper --ssh-key-file ssh/gcp
```
Mount the disk:
```sh
sudo lsblk
PROCESSOR_DISK_DEVICE_NAME=<NEW_NAME>
echo $PROCESSOR_DISK_DEVICE_NAME
sudo mount -o \
discard,defaults \
/dev/$PROCESSOR_DISK_DEVICE_NAME \
/mnt/disks/processor
sudo chmod a+w /mnt/disks/processor
```
:::tip
See the bootstrapper creation steps above for a walkthrough of this process.
:::
Replace the old config:
```sh
mv config.yaml /mnt/disks/processor/data/config.yaml
echo "New config:"
cat /mnt/disks/processor/data/config.yaml
echo
```
Disconnect from the bootstrapper:
```sh
exit
```
Stop the bootstrapper:
```sh
gcloud compute instances stop bootstrapper
```
Detach `processor-disk` from the bootstrapper:
```sh
gcloud compute instances detach-disk bootstrapper --disk processor-disk
```
Redeploy `processor` using the `gcloud compute instances create-with-container` command from the initial deployment.
Redeploy `postgrest` using the `gcloud run deploy` command from the initial deployment, after setting a max number of rows:
```sh
PGRST_DB_MAX_ROWS=<MAX_ROWS_FOR_FETCH>
echo $PGRST_DB_MAX_ROWS
```
Redeploy `websockets` using the `gcloud run deploy` command from the initial deployment, after reconstructing the WebSockets connection string:
```sh
PGWS_DB_URI="$(printf '%s' postgres://\
$ADMIN_NAME:\
$ADMIN_PASSWORD@\
$POSTGRES_INTERNAL_IP/econia
)"
echo $PGWS_DB_URI
```
## Diagnostics
### Check instance container status
Connect to an instance:
```sh
gcloud compute ssh <INSTANCE_NAME> --ssh-key-file <SSH_KEY_FILE>
```
Check Docker status:
```sh
docker ps
```
:::tip
If your container restarts every minute or so, you've got a problem.
:::
Exit instance connection:
```sh
exit
```
### Check instance container logs
Set instance name and number of logs to pull:
```sh
INSTANCE_NAME=<INSTANCE_NAME>
N_LOGS=<HOW_MANY_LOGS>
echo $PROJECT_ID
echo $INSTANCE_NAME
echo $N_LOGS
```
Get the instance ID:
```sh
INSTANCE_ID=$(gcloud compute instances describe $INSTANCE_NAME \
--zone $ZONE \
--format="value(id)"
)
echo $INSTANCE_ID
```
Pull the logs:
```sh
gcloud logging read "resource.type=gce_instance AND \
logName=projects/$PROJECT_ID/logs/cos_containers AND \
resource.labels.instance_id=$INSTANCE_ID" \
--limit $N_LOGS
```
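If you pull container logs often, a small wrapper around the two commands above saves retyping (a sketch; it assumes `$ZONE` and `$PROJECT_ID` are still set in your shell):
```sh
# Usage: container_logs <instance-name> <n-logs>
container_logs() {
local instance_id
instance_id=$(gcloud compute instances describe "$1" \
--zone "$ZONE" \
--format "value(id)")
gcloud logging read "resource.type=gce_instance AND \
logName=projects/$PROJECT_ID/logs/cos_containers AND \
resource.labels.instance_id=$instance_id" \
--limit "$2"
}
```
For example: `container_logs processor 5`.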