Go To: VPC networks > VPC networks > Create VPC network. (link)
Enter the following values:
Name: care-vpc
Maximum transmission unit (MTU): 1460
VPC network ULA internal IPv6 range: Disabled
Subnet creation mode: Custom
Create a new subnet using following values:
Name: cluster-snet
Region: asia-south1
IP stack type: IPv4 (single-stack)
IPv4 range: 10.0.0.0/16
Private Google Access: On
Flow logs: Off
Firewall rules: Leave default
Dynamic routing mode: Regional
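The network setup above can also be done from Cloud Shell. A sketch using the gcloud CLI, assuming it is already authenticated against your project:

```shell
# Create the custom-mode VPC with the MTU and routing mode from the console steps
gcloud compute networks create care-vpc \
  --subnet-mode=custom \
  --mtu=1460 \
  --bgp-routing-mode=regional

# Create the cluster subnet with Private Google Access enabled
gcloud compute networks subnets create cluster-snet \
  --network=care-vpc \
  --region=asia-south1 \
  --range=10.0.0.0/16 \
  --enable-private-ip-google-access
```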
Go To: VPC Networks > IP Addresses > RESERVE EXTERNAL STATIC IP ADDRESS (link)
Enter the following values:
Name: pip-care
Network Service Tier: Premium
IP version: IPv4
Type: Regional
Region: asia-south1 (Mumbai)
Attached to: None
Note down the IP address
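The same reservation can be made with gcloud; the describe command prints the address you need to note down:

```shell
# Reserve a regional external static IP in the Premium tier
gcloud compute addresses create pip-care \
  --region=asia-south1 \
  --network-tier=PREMIUM

# Print the reserved IP address
gcloud compute addresses describe pip-care \
  --region=asia-south1 --format='value(address)'
```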
Create two database instances, care-db and metabase-db
SQL > Create instance > PostgreSQL (link)
Create instance for care:
Instance ID: care-db
Password: <use your own strong password>
Database version: PostgreSQL 14
Cloud SQL edition: Enterprise
Region: asia-south1
Zonal availability: Single zone
Click show zones
Primary zone: asia-south1-a
Click show configurations
Machine configuration: Dedicated core (2 vCPU, 8 GB)
Storage type: SSD
Storage capacity: 20 GB
Enable automatic storage increases: Enabled
Under connections, set
Instance IP assignment: Private IP
Associated networking: care-vpc
If you are not presented with the set up connection dialog, skip the private services access steps below
Select SET UP CONNECTION to set up a private services access connection and click ENABLE API
Select Use an automatically allocated IP range
Click create connection
Public IP: Disabled
Under Data protection
Automated backups: Enabled
Automated backup window: 2:30 AM - 6:30 AM
Enable point-in-time recovery: Enabled
Days of logs: 7
Enable deletion protection: Enabled
Maintenance window: Sunday
Once the db instance is initialized, create a new database:
Click on care-db > Databases > create database
Enter the db name care
Click create
Repeat the above steps for metabase-db with the following changes:
Instance name: metabase-db
Machine configuration: Dedicated core (1 vCPU, 3.75 GB)
db name: metabase
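The Cloud SQL steps above can be sketched with gcloud. This is an assumed equivalent, not a tested script: PROJECT_ID is a placeholder, the backup start time is in UTC (2:30 AM IST is 21:00 UTC), and metabase-db repeats the same command with a smaller machine tier:

```shell
# care-db: PostgreSQL 14, private IP only on care-vpc, backups + PITR enabled
gcloud sql instances create care-db \
  --database-version=POSTGRES_14 \
  --edition=enterprise \
  --region=asia-south1 \
  --zone=asia-south1-a \
  --cpu=2 --memory=8GiB \
  --storage-type=SSD --storage-size=20GB --storage-auto-increase \
  --network=projects/PROJECT_ID/global/networks/care-vpc \
  --no-assign-ip \
  --backup-start-time=21:00 \
  --enable-point-in-time-recovery \
  --retained-transaction-log-days=7 \
  --deletion-protection \
  --maintenance-window-day=SUN --maintenance-window-hour=2

# Create the application database inside the instance
gcloud sql databases create care --instance=care-db

# metabase-db is the same, with --cpu=1 --memory=3840MiB and
# "gcloud sql databases create metabase --instance=metabase-db"
```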
Go To: Cloud Storage > Buckets > Create (link)
Create a publicly accessible bucket for facility images:
Name: <prefix>-care-facility
Location type: Region
Location: asia-south1 (Mumbai)
Default storage class: Standard
Public access prevention: Off
Access control: Uniform
Protection tools: None
Create a private bucket for patient data.
Name: <prefix>-care-patient-data
Location type: Region
Location: asia-south1 (Mumbai)
Default storage class: Standard
Public access prevention: On
Access control: Uniform
Protection tools: Retention policy: 7 days
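Both buckets can also be created from the CLI. A sketch assuming gcloud is authenticated; PREFIX is your chosen prefix:

```shell
# Public bucket for facility images (public access prevention off)
gcloud storage buckets create gs://PREFIX-care-facility \
  --location=asia-south1 \
  --default-storage-class=STANDARD \
  --uniform-bucket-level-access \
  --no-public-access-prevention

# Make objects in the facility bucket publicly readable
gcloud storage buckets add-iam-policy-binding gs://PREFIX-care-facility \
  --member=allUsers --role=roles/storage.objectViewer

# Private bucket for patient data with a 7-day retention policy
gcloud storage buckets create gs://PREFIX-care-patient-data \
  --location=asia-south1 \
  --default-storage-class=STANDARD \
  --uniform-bucket-level-access \
  --public-access-prevention \
  --retention-period=7d
```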
Go to Cloud Storage > Settings > Interoperability (link)
Under Access keys for service accounts, click on Create a key for a service account
Click create a new service account:
Name: care-bucket-access
Click “Create and continue”
Role: Storage Object Admin under Cloud Storage
Click "Continue" then "Done"
Select care-bucket-access and click on create key
Note down the Access key and Secret for later
Activate Cloud Shell
Create a file bucket-config.json containing the CORS configuration for the buckets
Replace the origin values with your deployed frontend URLs
Apply the config to both buckets using the gcloud CLI
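The exact contents of bucket-config.json are not reproduced here, so treat the following as an assumed example of a typical CORS policy; the origin, methods, and bucket names are placeholders to replace with your own:

```shell
# Write an example CORS policy; swap the origin for your frontend URL(s)
cat > bucket-config.json <<'EOF'
[
  {
    "origin": ["https://care.example.com"],
    "method": ["GET", "PUT", "POST"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
EOF

# Apply the CORS policy to both buckets
gcloud storage buckets update gs://PREFIX-care-facility --cors-file=bucket-config.json
gcloud storage buckets update gs://PREFIX-care-patient-data --cors-file=bucket-config.json
```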
Go To: Kubernetes Engine > clusters > create > standard (link)
Cluster basics:
Name: care-gke
Location type: Zonal
Zone: asia-south1-a
Node pools > default pool
Number of nodes: 2
Node pools > default pool > nodes
Machine configuration: General purpose
Series: E2
Machine type: e2-standard-2 (2 vCPU, 8 GB memory)
Node pools > default pool > networking: Network tags: care-gke
Cluster > Networking
Network: care-vpc
Node subnet: cluster-snet
Network access: Public cluster
Enable HTTP load balancing: Enabled
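The cluster settings above map to a single gcloud command; a sketch, assuming defaults for everything not listed:

```shell
# Zonal standard cluster on care-vpc with 2 e2-standard-2 nodes
gcloud container clusters create care-gke \
  --zone=asia-south1-a \
  --num-nodes=2 \
  --machine-type=e2-standard-2 \
  --tags=care-gke \
  --network=care-vpc \
  --subnetwork=cluster-snet \
  --addons=HttpLoadBalancing
```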
Once the cloud resources are created, we can deploy our applications as Kubernetes workloads. The necessary YAML files can be found as a template in the link below.
Template repo: https://github.com/coronasafe/infra_template
Using the template, replace all generic/example values with production values. Let’s go through each folder.
Replace the example hostnames for ‘dnsNames’ with actual hostnames
In care-configmap.yaml, add database configurations and update the hostnames in CSRF_TRUSTED_ORIGINS and DJANGO_ALLOWED_HOSTS
In nginx.yaml, update the server_name with hostnames.
Install Helm[Ref]
Use the static IP created in the "Reserve a static IP address" step to replace the IP value in helm/scripts.sh
Replace example hostnames with actual hostnames
Update care-secrets.yml
Update metabase.yml with metabase db credentials.
Set the default GKE cluster context
Get the name using: kubectl config get-contexts
Set the config: kubectl config use-context <name>
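If the cluster does not appear in your kubeconfig yet, gcloud can fetch credentials and set the context in one step (zone as used when creating the cluster):

```shell
# Adds the cluster to kubeconfig and makes it the active context
gcloud container clusters get-credentials care-gke --zone=asia-south1-a

# Verify the active context points at care-gke
kubectl config current-context
```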
Run the helm script: bash helm/scripts.sh
Use kubectl to apply all the Kubernetes YAML files in the following order (kubectl does not expand quoted globs, so pass the directories to -f):
Configmaps: kubectl apply -f configmaps/
Secrets: kubectl apply -f secrets/
Deployments: kubectl apply -f deployments/
Services: kubectl apply -f services/
Clusterissuer: kubectl apply -f ClusterIssuer/cluster-issuer.yaml
Certificate: kubectl apply -f certificate/certificate.yml
Ingress: kubectl apply -f ingress/care.yaml
Once the ingress is created, kubectl get ingress care-ingress will show the IP of the TCP load balancer.
Create DNS A records for each domain pointing to the static IP created in the "Reserve a static IP address" step
Once the DNS records are added, SSL certificates will be issued automatically
Once the instance is up and the DNS records are active, visit the Metabase URL; you will be presented with a new-user form where you can create an admin account