An Introduction to CARE
CARE is a centralized capacity and patient management system that connects patients, doctors, hospitals, labs, specialized treatment centres, hospital administrators, shifting control cells, and more. Each hospital logs in and updates information regarding its assets, including bed capacity, healthcare personnel, current patient count, and status, so that the district administration gets a bird's-eye view of the entire healthcare system, as well as of the patients, through smart and intuitive dashboards.
The software is free and open source, available under the MIT License at https://github.com/coronasafe
Go To: VPC network > VPC networks > Create VPC network (equivalent gcloud commands are sketched after this list).
Enter the following values:
Name: care-vpc
Maximum Transmission unit (MTU): 1460
VPC network ULA internal IPv6 range: Disabled
Subnet creation mode: Custom
Create a new subnet using following values:
Name: cluster-snet
Region: asia-south1
IP stack type: IPv4 (single-stack)
IPv4 range: 10.0.0.0/16
Private Google Access: On
Flow logs: Off
Firewall rules: Leave default
Dynamic routing mode: Regional
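If you prefer the gcloud CLI to the console, the same network can be created along these lines (a sketch using the values above; adjust the project as needed):

# Custom-mode VPC with the MTU given above
gcloud compute networks create care-vpc \
    --subnet-mode=custom \
    --mtu=1460

# Subnet with Private Google Access enabled
gcloud compute networks subnets create cluster-snet \
    --network=care-vpc \
    --region=asia-south1 \
    --range=10.0.0.0/16 \
    --enable-private-ip-google-access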
Go To: VPC Networks > IP Addresses > RESERVE EXTERNAL STATIC IP ADDRESS
Enter the following values:
Name: pip-care
Network Service Tier: Premium
IP version: IPv4
Type: Regional
Region: asia-south1 (Mumbai)
Attached to: None
Note down the IP address
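The same reservation can be made with the gcloud CLI (a sketch using the values above):

# Reserve a regional external static IPv4 address in the Premium tier
gcloud compute addresses create pip-care \
    --region=asia-south1 \
    --network-tier=PREMIUM

# Print the reserved address so you can note it down
gcloud compute addresses describe pip-care \
    --region=asia-south1 \
    --format='get(address)'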
Go To: Cloud Storage > Buckets > Create (equivalent gcloud commands are sketched after the bucket settings below).
Create a publicly accessible bucket for facility images:
Name: <prefix>-care-facility
Location type: Region
Location: asia-south1 (Mumbai)
Default storage class: Standard
Public access prevention: Off
Access control: Uniform
Protection tools: None
Create a private bucket for patient data.
Name: <prefix>-care-patient-data
Location type: Region
Location: asia-south1 (Mumbai)
Default storage class: Standard
Public access prevention: On
Access control: Uniform
Protection tools: Retention policy: 7 days
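If you prefer the CLI, roughly equivalent gcloud commands are sketched below (flag spellings may vary slightly across gcloud releases; treat this as a sketch, not the exact console behaviour):

# Public bucket for facility images
gcloud storage buckets create gs://<prefix>-care-facility \
    --location=asia-south1 \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access

# Private bucket for patient data, with public access prevention
# and a 7-day retention policy
gcloud storage buckets create gs://<prefix>-care-patient-data \
    --location=asia-south1 \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access \
    --public-access-prevention \
    --retention-period=7d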
Go to Cloud Storage > Settings > Interoperability
Under Access keys for service accounts, click on Create a key for a service account
Click create a new service account:
Name: care-bucket-access
Click “Create and continue”
Role: Storage Object Admin under Cloud Storage
Click "Continue" then "Done"
Select care-bucket-access and click on create key
Note down the Access key and Secret for later
Activate Cloud Shell
Create a file bucket-config.json with the following contents.
Replace the origin with your deployed frontend URLs.
Apply the config for the buckets using the gcloud CLI, as sketched below.
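The original file contents are not reproduced here; a minimal sketch, assuming a standard GCS CORS configuration that lets the frontend origin read and upload objects (the origin shown is a placeholder):

# Write a CORS config; replace the origin with your deployed frontend URL(s)
cat > bucket-config.json <<'EOF'
[
  {
    "origin": ["https://care.example.com"],
    "method": ["GET", "PUT", "POST"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
EOF

# Apply the config to each bucket
gcloud storage buckets update gs://<prefix>-care-facility --cors-file=bucket-config.json
gcloud storage buckets update gs://<prefix>-care-patient-data --cors-file=bucket-config.json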
Once the cloud resources are created, we can deploy our applications as Kubernetes workloads. The necessary YAML files can be found as a template in the link below.
Template repo: https://github.com/coronasafe/infra_template
Using the template, replace all generic/example values with production values. Let's go through each folder.
Replace the example hostnames for ‘dnsNames’ with actual hostnames
In care-configmap.yaml, add database configurations and update the hostnames in CSRF_TRUSTED_ORIGINS and DJANGO_ALLOWED_HOSTS
In nginx.yaml, update the server_name with hostnames.
Install Helm.
Use the static IP created in the "Reserve a static IP address" step to replace the IP value in helm/scripts.sh.
Replace example hostnames with actual hostnames
Update care-secrets.yml
Update metabase.yml with metabase db credentials.
Set the default GKE cluster:
Get the name using: kubectl config get-contexts
Set the config: kubectl config use-context <name>
Run the helm script: bash helm/scripts.sh
Use kubectl to apply all the Kubernetes YAML files in the following order (kubectl apply -f accepts a directory, so point it at each folder):
ConfigMaps: kubectl apply -f configmaps/
Secrets: kubectl apply -f secrets/
Deployments: kubectl apply -f deployments/
Services: kubectl apply -f services/
ClusterIssuer: kubectl apply -f ClusterIssuer/cluster-issuer.yaml
Certificate: kubectl apply -f certificate/certificate.yml
Ingress: kubectl apply -f ingress/care.yaml
Once the ingress is created, kubectl get ingress care-ingress will show the IP of the TCP load balancer.
Create DNS A records for each domain pointing to the static IP created in the "Reserve a static IP address" step.
Once the DNS records are added, SSL will be handled automatically.
The components required in deploying CARE to any cloud/on-prem infrastructure are listed below.
Virtual Network
Kubernetes Cluster
Postgres DB
Load balancer
Virtual Machine
Postgres DB
Load balancer
Virtual Machine as a jump server
Firewall / Network security groups
Virtual Machine for VPN
The green components are the basic requirements that we expect a cloud service provider or on-prem infrastructure to provide.
Go To: SQL > Create instance > Choose PostgreSQL
Create two database instances, care-db and metabase-db (equivalent gcloud commands are sketched at the end of this section).
Create the instance for care:
Instance ID: care-db
Password: <use your own strong password>
Database version: PostgreSQL 14
Cloud SQL edition: Enterprise
Region: asia-south1
Zonal availability: Single zone
Click show zones
Primary zone: asia-south1-a
Click show configurations
Machine configuration: Dedicated core (2 vCPU, 8 GB)
Storage type: SSD
Storage capacity: 20 GB
Enable automatic storage increases: Enabled
Under connections, set
Instance IP assignment: Private IP
Associated networking: care-vpc
If you are not presented with the set up connection dialog, skip ahead to the Public IP setting below.
Select SET UP CONNECTION for setting up a private service connection and Click ENABLE API
Select Use an automatically allocated IP range
Click create connection
Public IP: Disabled
Under Data protection
Automated backups: Enabled
Automated backup window: 2:30 AM - 6:30 AM
Enable point-in-time recovery: Enabled
Days of logs: 7
Enable deletion protection: Enabled
Maintenance window: Sunday
Once the db instance is initialized, create a new database:
Click on care-db > Databases > create database
Enter the db name care
Click create
Repeat the above steps for metabase-db with the following changes:
Instance name: metabase-db
Machine configuration: Dedicated core (1 vCPU, 3.75 GB)
db name: metabase
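For reference, the care-db setup above maps roughly onto the following gcloud commands (a sketch; the private-IP service connection must already exist, and flag spellings may differ across gcloud versions):

# Instance: 2 vCPU / 8 GB corresponds to tier db-custom-2-8192
gcloud sql instances create care-db \
    --database-version=POSTGRES_14 \
    --edition=enterprise \
    --tier=db-custom-2-8192 \
    --region=asia-south1 \
    --zone=asia-south1-a \
    --storage-type=SSD \
    --storage-size=20 \
    --storage-auto-increase \
    --network=care-vpc \
    --no-assign-ip

# Database inside the instance
gcloud sql databases create care --instance=care-db

Repeat with the metabase-db values for the second instance.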
Go To: Kubernetes Engine > Clusters > Create > Standard (an equivalent gcloud command is sketched after this list).
Cluster basics:
Name: care-gke
Location type: Zonal
Zone: asia-south1-a
Node pools > default pool
Number of nodes: 2
Node pools > default pool > nodes
Machine configuration: General purpose
Series: E2
Machine type: e2-standard-2 (2 vCPU, 8 GB memory)
Node pools > default pool > networking: Network tags: care-gke
Cluster > Networking
Network: care-vpc
Node subnet: cluster-snet
Network access: Public cluster
Enable HTTP load balancing: Enabled
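The equivalent cluster can be created with a single gcloud command (a sketch using the values above):

gcloud container clusters create care-gke \
    --zone=asia-south1-a \
    --num-nodes=2 \
    --machine-type=e2-standard-2 \
    --network=care-vpc \
    --subnetwork=cluster-snet \
    --tags=care-gke \
    --addons=HttpLoadBalancing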
Most projects on the Coronasafe Network are dockerized and can be easily deployed on various container orchestration services for scalability. This documentation contains instructions for setting up our Care Network project in a production environment on the Kubernetes service provided by Amazon (AWS EKS). The same principles can be applied to other projects under the Coronasafe Network.
For help in setting up a local version of the project for development, see Setting up the project locally
Before deployment, make sure that both the care and care_fe repos from the Coronasafe Network GitHub organization are forked into the required organization's account. This makes it easier for the projects to remain in sync: the CARE project will keep receiving security updates and features, which can easily be synced into the deployment fork. Any security issue must be reported to the Coronasafe Network admins before being implemented in any forks.
Set up an SMTP server. When a government is deploying CARE, it should provide email gateway credentials along with the domains.
Set up a Sentry project for error management.
Create a Virtual Network.
Create a PostgreSQL server. Create a database and add the PostGIS extension.
Create an S3-compatible storage (since the Azure Storage account does not support the S3 API) and get credentials for the storage.
Create a private AKS cluster.
Create a VM instance as a jump server.
Create a VM instance as a bastion host, install a VPN on it, and whitelist its IP in the jump server's network security group.
Configure a self-hosted agent (Linux) on the jump server for CI/CD pipelines.
Update configurations for each deployment in their corresponding YAML files.
Create the deployments using the CI/CD pipelines from Github.
Create a VM instance in the same VNET as CARE.
Create a PostgreSQL Server. Create a database for Metabase in it.
Create a read-replica of CARE Database for Metabase to read data from.
Follow the installation steps from the official documentation.
Create a load balancer and expose the service.
Set up an SMTP server. When a government is deploying CARE, it should provide email gateway credentials along with the domains.
Set up a Sentry project for error management.
Create a VPC.
Create a PostgreSQL server. Create a database and add the PostGIS extension.
Create an Amazon S3 bucket.
Create a private EKS cluster.
Create an EC2 instance as a jump server.
Create an EC2 instance as a bastion host, install a VPN on it, and whitelist its IP in the jump server's network security group.
Configure a self-hosted runner (Linux) on the jump server for CI/CD pipelines.
Update configurations for each deployment in their corresponding YAML files.
Create the deployments using the CI/CD pipelines from Github.
Create an EC2 instance in the same VPC as CARE.
Create a PostgreSQL Server. Create a database for Metabase in it.
Create a read-replica of CARE Database for Metabase to read data from.
Follow the installation steps from the official documentation.
Create a load balancer and expose the service.
Deployment on EKS involves quite a few steps. This page provides a link at the bottom to a document that can help in creating the necessary resources on AWS. However, read the following notes once before diving in, as they may answer many questions that come up during the setup:
At all times, the region code for the Mumbai services will be ap-south-1. Make sure to choose this to reduce latency in the services.
Step 0 in the article asks you to install the AWS IAM Authenticator. Note that this is not strictly necessary.
In Step 1, there have been some changes since the article was written. Choose "EKS Cluster" as the use case while creating the role. The AmazonEKSServicePolicy is no longer required.
In Steps 2 and 4, you will not be able to provide the same URL for uploading the template. Instead, manually download the template to your system and upload it where required.
In the VPC creation step (Step 2), provide any valid IP range from RFC 1918. Make sure your range does not collide with any other existing VPC.
In the Node Group creation step (Step 4), execute aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.16/amazon-linux-2/recommended/image_id --query "Parameter.Value" --output text on your command line, replacing 1.16 with your cluster's Kubernetes version. Use the output value as the Node Image ID.
Follow the steps until and including Step 4 only.
Click here to set up the necessary AWS resources for your EKS deployment.
To create a local development setup of the care backend and frontend projects, perform the following steps.
Install PostgreSQL on your machine. The instructions for the same can be found at https://www.postgresql.org/download/
Change the authentication method to md5 instead of peer. The instructions for this can be found in this StackOverflow answer.
Install the GDAL and GEOS libraries. (This is required because CARE uses a PostGIS database connection by default; it powers the location fields, which can later be used for better location-based queries.) On Debian/Ubuntu systems, run sudo apt-get install libgdal-dev libgeos-dev python-dev
Clone the project: $ git clone https://github.com/coronasafe/care
$ cd care
Create a virtual environment: $ virtualenv -p $(which python3) venv
$ source venv/bin/activate
$ pip install -r requirements/local.txt
Set the DATABASE_URL environment variable for your Postgres database. (By default it expects a database named "care"; only change this variable if your database is named something else.)
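For example, for a local Postgres instance (hypothetical credentials; adjust to your setup):

# Assumes a local Postgres with user "postgres" and a database named "care"
export DATABASE_URL=postgres://postgres:<password>@127.0.0.1:5432/care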
Run the database migrations: $ python manage.py migrate
Create a superuser: $ python manage.py createsuperuser
$ python manage.py runserver
The project should be running at http://localhost:8000. API specs are available at http://localhost:8000/redoc and http://localhost:8000/swagger.
Make sure you have Node v10 or above installed on your machine.
Clone the project: $ git clone https://github.com/coronasafe/care_fe
$ cd care_fe
$ git checkout master
$ npm install
$ npm run start
The project should be running at http://localhost:4000. In order to connect the backend to the frontend, modify the proxy target for /api in webpack.config.js.
This stage is the actual step where we deploy and run the project on the cluster. Since this is not a tutorial on Kubernetes infrastructure or its advantages, we shall not delve into the details of the several files used in this stage.
First, clone the following repository: https://github.com/coronasafe/k8-templates
The repo contains the necessary manifest files required for deploying your care project. The files have been separated into different directories. Take a quick look at the various files in the repo.
We shall now look into what our end deployment looks like.
The requests first come to a LoadBalancer we shall create. This is then balanced across the nginx pods inside our LoadBalancer service. The requests are then routed according to the requested resource. If the route is /api, they are sent to the Backend Service which then distributes it across the Backend pods running one container each with the backend image from ECR. All other requests are sent to the Frontend Service and distributed across the Frontend pods running one container each with the frontend image from ECR.
Services and deployments in Kubernetes play complementary roles. A deployment holds the pod definition, and the cluster ensures the specified number of pods is always running from it; you can think of deployments like classes in C++ and pods like class instances. A service is bound to a set of pods and exposes them as a single "service" to other services in the cluster, load-balancing traffic across them.
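To make the deployment/pod/service relationship concrete, here is a minimal generic sketch (names are illustrative and nginx stands in for the care image; this is not taken from the CARE manifests):

kubectl apply -f - <<'EOF'
# Deployment: the "class" — holds the pod template and the desired replica count
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-backend
spec:
  replicas: 2                 # the cluster keeps two pods ("instances") running
  selector:
    matchLabels:
      app: demo-backend
  template:
    metadata:
      labels:
        app: demo-backend
    spec:
      containers:
      - name: web
        image: nginx:stable   # stand-in for the backend image from ECR
        ports:
        - containerPort: 80
---
# Service: exposes the matching pods under one stable name inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: demo-backend-svc
spec:
  selector:
    app: demo-backend         # traffic is load-balanced across matching pods
  ports:
  - port: 80
    targetPort: 80
EOF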
AWS RDS can be used to store your application data. The Mumbai region servers can help in reducing latency for your services. In order to setup an RDS instance, head over to the RDS console and create a new instance. Here are a few things you may need to look out for:
The instance engine must be chosen as Postgres.
Choose the Production template
Choose the settings that are right for your use case.
Make sure you select the VPC you created for EKS and not any other VPC. The VPC will be immutable after the instance creation.
A port different from the standard 5432 may be provided. This option is found in the 'Additional connectivity configuration' subsection under the 'Connectivity' section. While this does not add a huge security advantage, it may be desired.
Create an initial database inside the instance by giving an 'Initial database name' under the 'Additional configuration' section.
After the instance creation completes, note the endpoint, port, username, password, and the initial database name you provided. Your DATABASE_URL required in the future steps will then be postgres://<username>:<password>@<host>:<port>/<initial database name> and the POSTGIS_URL will be postgis://<username>:<password>@<host>:<port>/<initial database name>.
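For example, with hypothetical RDS values:

export DATABASE_URL=postgres://care_admin:<password>@care-db.abc123xyz.ap-south-1.rds.amazonaws.com:5432/care
export POSTGIS_URL=postgis://care_admin:<password>@care-db.abc123xyz.ap-south-1.rds.amazonaws.com:5432/care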
For deploying CARE, it is advised to use the following AWS services, which power the project:
Amazon Elastic Kubernetes Service (EKS)
Amazon Elastic Compute Cloud (EC2) (2 machines)
Amazon Relational Database Service (RDS)
Simple Email Service (Amazon SES)
Simple Notification Service ( Amazon SNS ) for SMS
Amazon ElastiCache for Redis
Simple Storage Service (Amazon S3)
Content Delivery Network (CDN)
Our EKS deployment consists of running containers with docker images of the care project. The images will be served from a private container registry service provided by AWS called Elastic Container Registry (ECR). To setup the repositories for your care frontend and backend images, head over to AWS ECR console and create a new repository each for the backend and frontend.
To build the docker image of the care project, first clone the repositories:
$ git clone https://github.com/coronasafe/care
$ git clone https://github.com/coronasafe/care_fe
Inside the directory of the project you want to build, use the docker build command for building your required images.
E.g., if you are in the backend folder, run $ docker build -t care:latest .
After the creation of the image, you can verify it by running $ docker images
The following branches are used for dev and production
Care Backend ( care ) :
master -> Development Branch
production -> Stable Production Branch
Care Front End (care_fe):
develop -> Development Branch
master -> Stable Production Branch
Always use stable production builds in production, and keep the fork in sync every 3 days.
Always merge the development branch and test before merging into production.
Visit the ECR console.
Select the repository you want to push your image to.
On the top right corner, you should be able to find View push commands
Follow the steps there to push your docker image to AWS ECR.
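The push commands shown in the console generally boil down to the following (123456789012 is a placeholder for your AWS account ID, and care for your repository name):

# Authenticate Docker against your private ECR registry
aws ecr get-login-password --region ap-south-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-south-1.amazonaws.com

# Tag the locally built image and push it
docker tag care:latest 123456789012.dkr.ecr.ap-south-1.amazonaws.com/care:latest
docker push 123456789012.dkr.ecr.ap-south-1.amazonaws.com/care:latest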
Generic Infrastructure Requirements
High Availability Kubernetes Cluster:
Implement a multi-master Kubernetes cluster with at least three master nodes to ensure high availability and fault tolerance.
Deploy worker nodes across multiple geo-locations to avoid single points of failure.
Persistent Storage:
Set up dynamic storage provisioning using custom storage classes and external storage solutions.
Implement data replication and backup strategies for critical application data, ensuring data integrity and availability.
S3-Compatible Object Storage:
Provide an S3-compatible object storage solution with high availability and scalability features.
Configure data lifecycle policies for object versioning, retention, and automatic deletion, requiring careful data management.
Enforce encryption at rest and in transit for all stored objects.
Implement fine-grained access control using bucket policies, IAM roles, and access keys, ensuring only authorized users and applications can access the stored data.
Auto-Scaling:
Implement custom Horizontal Pod Autoscalers (HPAs) with custom metrics. Set up Cluster Autoscaler to dynamically adjust the number of worker nodes based on resource utilization.
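As an illustration, a plain CPU-based HPA looks like the sketch below (custom metrics additionally require a metrics adapter; the target deployment name is hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend             # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods above 70% average CPU
EOF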
Security Policies and Network Policies:
Enforce strict security policies, including PodSecurityPolicies and Network Policies, to control and isolate pods and services.
Custom Ingress Controllers:
Implement custom Ingress controllers for routing and traffic management, including features like header rewriting, SSL termination, and authentication.
Advanced Networking:
Configure a custom CNI (Container Network Interface) plugin with strict network policies to enforce micro-segmentation for maximum security.
Ability to create Network Policy resources to control ingress and egress traffic between pods, making network access more secure.
Custom Resource Definitions (CRDs):
Support for Custom Resource Definitions, for example for adding ClusterIssuers for the Let's Encrypt certificate authority or another certificate manager.
SMTP Server/Service:
An SMTP email server to handle email traffic for the domain the Care application is running on.
Role-Based Access Control (RBAC):
Enforce fine-grained RBAC policies, ensuring only authorized personnel can access and manage specific resources within the Kubernetes cluster.
Centralized Logging and Error Detection:
Configure centralized logging with log aggregation and analysis using tools like Sentry.
Secrets Management:
Utilize advanced secret management solutions like HashiCorp Vault or Kubernetes native Secrets Store CSI Driver for secure storage and distribution of sensitive data.
Backup and Disaster Recovery:
Establish a backup and disaster recovery strategy, including off-site backups, data snapshots, and automated failover procedures.
Compliance and Auditing:
Implement Kubernetes audit logging and maintain compliance with industry-specific standards (e.g., CIS Kubernetes Benchmarks) for on-premises deployments.
Documentation and Training:
Exhaustive documentation, training materials, and runbooks for onboarding and maintaining the Kubernetes setup.
Advanced Backup and Restore Procedures:
Implement procedures for backup and restoration of the entire Kubernetes cluster, including etcd data, to ensure data integrity during failures.
Database Cluster Setup:
Deploy a highly available database cluster (e.g., PostgreSQL, MySQL) with multiple read replicas for scalability and fault tolerance.
Data Partitioning and Sharding:
Implement data partitioning and sharding strategies to distribute database load across nodes, requiring careful data modeling and management.
Database Encryption:
Enforce encryption at rest and in transit for database data, utilizing advanced encryption methods and key management.
Database Backups:
Configure automated database backup strategies with incremental and differential backups, ensuring data consistency and reliability.
Automated Failover:
Set up automated failover mechanisms for the database cluster to minimize downtime in case of node failures.
Database Maintenance Jobs:
To ensure database performance, schedule and manage maintenance jobs, such as index optimization, vacuuming, and data archiving.
Database Security Policies:
Enforce strict database security policies, including role-based access control, audit logging, and database-level encryption.
Database Replication Lag Monitoring:
Monitor and manage database replication lag to ensure data consistency across replicas, requiring timely intervention when lag exceeds thresholds.
Database Version Upgrades:
Plan database version upgrades with minimal downtime.
Database Scaling:
Implement auto-scaling policies for the database cluster, dynamically adjusting resources based on workload demand.
Continuous Deployment (CD):
Set up continuous deployment to automatically promote successfully tested changes to production without manual intervention.
Rollback Procedures:
Define rollback procedures and automate them in case of deployment failures or issues in production.
Environment Configuration Management:
Manage environment-specific configurations and secrets separately from the application code.
Monitoring and Alerting:
Track application performance and set up alerts for anomalies in deployments.
This section covers the environment and its required setup for running the care project in production. This involves setting up the secret keys, encryption keys, PostGIS database URLs etc. The project also employs other services such as Redis for caching and Sentry for logging.
The following list describes the variables you will need in the production environment; each entry is marked Required or Optional.

POSTGIS_URL (Required): The URL to your Postgres database with the PostGIS extension. To obtain this, simply replace postgres:// with postgis:// in your DATABASE_URL.
DJANGO_ADMIN_URL (Required): The path used to access your Django admin dashboard. For security purposes, it is a good idea to keep it a short random string.
DJANGO_SECRET_KEY (Required): A secret key used by Django for generating session cookies and other purposes. An online search will provide several methods for generating it.
FERNET_SECRET_KEY (Required): A secret key used for encrypting patient records in the database. It can be generated in the same way as the Django secret key. A change in the Fernet key will render all encrypted data unreadable, so make sure this variable is always handled with care. The development version has a hardcoded Fernet key to avoid issues.
DJANGO_SETTINGS_MODULE (Required): Specifies which settings module to use. Set the value to config.settings.production to point at the production settings file; development builds can use config.settings.staging. Defaults to the local settings.
USE_S3 (Optional): Set this to True if you want to use Amazon S3 buckets for storing your static files in production. Defaults to False. The backend copies the static files on start; the Gunicorn server serving the backend does not perform well with static files, so it is advised to configure S3 as a static file server.
AWS_STORAGE_BUCKET_NAME (Optional): The bucket name used to store your static files during the collectstatic step. Used only if USE_S3 is set to True.
AWS_ACCESS_KEY_ID (Optional): The AWS access key of the account used to access your S3 bucket. Used only when USE_S3 is set to True.
AWS_SECRET_ACCESS_KEY (Optional): The AWS secret key of the account used to access your S3 bucket. Used only when USE_S3 is set to True.
REDIS_URL (Required): The URL of your Redis instance, used for caching. Redis is also used for background job management.
CELERY_BROKER_URL (Required): The URL of your Redis instance, used for Celery worker management.
SENTRY_DSN (Required): The Sentry DSN value for logging errors from your app. To get a free DSN, sign up at sentry.io.
GOOGLE_RECAPTCHA_SITE_KEY (Optional): The configured site key for your reCAPTCHA. reCAPTCHA is used to prevent brute-force attacks while logging into CARE.
GOOGLE_RECAPTCHA_SECRET_KEY (Optional): The secret key for your reCAPTCHA.
DJANGO_ALLOWED_HOSTS (Required): A JSON array of hosts that are allowed to access your backend API. Requests with other Host fields will not complete successfully. Set it to ["*"] to allow all hosts. Defaults to ["*"].
CSRF_TRUSTED_ORIGINS (Required): A JSON array of hosts that are allowed to make cross-site requests to the backend API. Defaults to [].
DJANGO_SECURE_SSL_REDIRECT (Optional): Whether to redirect from HTTP to HTTPS automatically. Defaults to True.
RATE_LIMIT (Optional): A string of the form requests/time used for rate limiting. For example, to allow no more than 5 requests from a user in 10 minutes, provide 5/10m. Defaults to 5/10m. Rate limiting is only enforced on the login and signup endpoints to prevent brute-forcing; after the limit, every request requires a captcha to be present.
MAINTENANCE_MODE (Optional): Set this to 1 to put the site/API into maintenance mode. Defaults to 0.
POSTGRES_DB (Required): Your PostGIS DB name. This is used to check that the Postgres DB is connected before performing DB migrations.
POSTGRES_HOST (Required): Your Postgres host address.
POSTGRES_USER (Required): Your Postgres username.
POSTGRES_PASSWORD (Required): The password for your POSTGRES_USER.
POSTGRES_PORT (Required): Your Postgres instance port number.
SNS_ACCESS_KEY (Required): AWS SNS access key for sending SMS messages.
SNS_SECRET_KEY (Required): AWS SNS secret key for sending SMS messages.
VAPID_PUBLIC_KEY (Required): VAPID public key for sending web push notifications. Defaults to publicly visible certificates.
VAPID_PRIVATE_KEY (Required): VAPID private key for sending web push notifications. Defaults to publicly visible certificates.
FILE_UPLOAD_BUCKET (Required): Name of an AWS S3 bucket, with no public access, used to store confidential patient files.
FILE_UPLOAD_KEY (Required): AWS access key used to access the file upload bucket.
FILE_UPLOAD_SECRET (Required): AWS secret key used to access the file upload bucket.
EMAIL_HOST (Required): SMTP email host.
EMAIL_USER (Required): SMTP email user.
EMAIL_PASSWORD (Required): SMTP email password.
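As an illustration, a production environment file might contain entries like the following (all values are placeholders; generate your own keys):

export DJANGO_SETTINGS_MODULE=config.settings.production
export DJANGO_ADMIN_URL=<short-random-string>/
export DJANGO_SECRET_KEY=<random-secret>
export FERNET_SECRET_KEY=<fernet-key>   # changing this later makes encrypted data unreadable
export DATABASE_URL=postgres://care:<password>@<db-host>:5432/care
export POSTGIS_URL=postgis://care:<password>@<db-host>:5432/care
export REDIS_URL=redis://<redis-host>:6379/0
export CELERY_BROKER_URL=redis://<redis-host>:6379/1
export DJANGO_ALLOWED_HOSTS='["care.example.com"]'
export CSRF_TRUSTED_ORIGINS='["https://care.example.com"]'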
An AWS account with options to spin up the services listed in the services section is required. Use the referenced guide to configure the AWS environment.
The following are what you'll need for the setup:
An AWS account with Administrator Access
A Linux machine
A decent internet connection
Git installed on your machine
AWS CLI
Click here for installation instructions.
Kubectl command
Click here for installation instructions.
Once you have the above, move to the next step.
This document is only for reference and may not be strictly adhered to. You may choose to set up a war room to suit your needs with the available resources.
To set up a war-room the infrastructure required is:
3 Large screens
WiFi Requirement: At least 100 Mbps speed
Laptops, one each for data entry staff and 3 extra laptops to control the screens.
HDMI cables to connect the screens to the laptops
Ample number of charging points
The setup of the desktop is a fairly straightforward process. This involves unboxing and installing the UPS, CPU, and monitor, and then connecting the bundled keyboard and mouse.
This part is slightly tricky. First connect the PC to the internet via a LAN cable to get started. A check for updates will occur; as soon as this is complete, disconnect the LAN cable.
Set up a local account with the default values mentioned below.
Device Name: 10BedICU Spoke
User: 10BedICU
Contact the 10BedICU Installation Co-ordinator to set the password.
Once the account is set up and logged in to Windows, reconnect the LAN network, and the installation is complete.
Deploying CARE is a process that takes at least 3 weeks to establish. The steps are detailed below:
Week 01
The relevant Government Orders adopting the system must be issued
The district collectors and their teams must be onboarded
The server (preferably at state level) must be set up
The CARE Nodal team at district level needs to be set up
In-depth training to the CARE Nodal team
CARE Nodal team to lead training for field level staff
Week 02
Data entry from hospitals commences (capacity and inventory management)
Strict enforcement by the Nodal team
Amending deficiencies in training
Rollout of the Patient Management module starts
Week 03
War room Analytics team gets into action
Weekly reviews for continuous optimisation
Student cadres for engineering and field operations launched
As the technology is set up, it is important to have a system on the field to operationalize the systems to aid the health department in managing the pandemic.
CARE is a web-based hospital management system with a special focus on capacity augmentation and load balancing. It is open-source software with an MIT license, built by a group of volunteers. It can be utilised by anyone for free, and it shall remain so forever.
To learn more about CARE and its features, CLICK HERE.
CARE connects all healthcare facilities within your region (district/state/country) in one network. The critical details of healthcare assets are entered into the system first-hand by each healthcare facility. This Data is projected on the district level dashboards so that the district team (district collector) has all relevant information to take decisions.
Data Flow in CARE:
Each hospital enters data about three things:
Patients
Bed capacity and availability
Oxygen availability and consumption
The data entry at each hospital is closely monitored and enforced by a team appointed by the district collector.
A short 6-minute video of the recurring data entry at any hospital is available here.
A War Room (Control Centre) is set up at the district level where the dashboards are viewed to make quick decisions on the field.
The district collector issues an order mandating each hospital (government as well as private) to register within CARE and update information on bed capacity and availability at every set interval. The latest order by the Ernakulam district collector is HERE for your reference. The order must have the following points:
Must provide for setting up of a central COVID Control centre (War room) for the district
A CARE Nodal Officer must be appointed.
Must mandate that all hospitals within the district register themselves on CARE within the said date and update details at least every 3 hours.
A time period must be mentioned within which training of all staff in using CARE must be achieved. The Nodal Officer for Training must be responsible for this.
The collector identifies one high level health official (in case of Ernakulam, Dr. Mathews Numpeli, the District Program Manager of Ernakulam, National Health Mission) to be the CARE Nodal officer for deploying the system.
Under the Nodal Officer, the following officers handle specific functions:
CARE Enforcement Officer: Ernakulam district is divided into 7 Taluks. The PROs of the Health Department at each Taluk are responsible for ensuring compliance with the order mentioned in point 1. The PROs from the Taluks report directly to the PRO of the district. Law enforcement is engaged to deal with any resistance from the field to ensure data entry.
Nodal Officer for Capacity building and Augmentation: This officer identifies exactly where the maximum load on healthcare is and works towards increasing infrastructure like setting up new COVID treatment centres with the support of government and private contributions.
Nodal Officer for Oxygen: This person heads the committee that manages oxygen. The committee has representatives from the department of industries to identify suppliers and ensure supply, the RTO (enforcement) to ensure logistics for the oxygen and also Law Enforcement to ensure safety while transporting the oxygen.
Nodal Officer for Shifting: There is one officer specifically to ensure that the shifting of patients from one hospital to another happens smoothly on CARE. They must liaise with ambulance managers and the RTO to optimise logistics.
Nodal Officer for Training and Human Resource Management: This officer ensures there are enough data entry operators and that training on using CARE is given to all. This officer also manages the engagement of volunteers to manage data.
The Medical Officers (MOs) in each panchayat are held responsible for patient management within their jurisdiction.
A district central war-room is set up displaying all the dashboards.
The war room has a team of 15 data entry staff continuously monitoring data entry from the hospitals and immediately identifying any lapse.
A district-level shifting team that works around the clock must be set up to operate shifting. The team must at any point in time have at least 2 doctors and 8 logistics managers to ensure smooth shifting.
There is a Mobile Training Team of 10-15 people for training all users on the field.
There must be one person at each healthcare facility to enter the details regarding
Patients
Capacity and bed availability
Oxygen, supply and consumption
The CARE system facing the hospital staff is easy to use and efficient. The registration of a hospital can be done within 5 minutes, while the periodic updating of data takes less than 5 minutes. Click HERE to see a demo of the registration and data entry from hospitals.
At the panchayat level, the Medical Officers are responsible for updating the status of COVID patients under Home Care.
The PROs of the Health Department at each taluk oversee that data entry happens promptly at each hospital.
There is a team of at least 15 data entry personnel at the district control room to monitor and ensure accuracy of the data collected from the field. They also immediately identify any lapse from any hospital.
The CARE Capacity Dashboard shows when each hospital last updated its information. The Nodal CARE Enforcement Officer supervises this system.
The Nodal person for Capacity Building at the district control room will identify the areas where capacity is lacking using the CARE Capacity Dashboard. The officer engages with private/public organizations to build capacity by setting up COVID treatment centres or increasing the bed capacity of hospitals.
There is an Oxygen Committee functioning out of the District Control Room. The committee comprises:
Staff of the Department of Industries: These staff coordinate with the suppliers and the industries in identifying resources.
Regional Transport Officers (Enforcement): These officers arrange vehicles to transport oxygen in a timely manner.
Law Enforcement: Wherever required, the police must be engaged to ensure safety of transportation of oxygen
The Oxygen Nodal Officer coordinates with all the above individuals and gets real-time data on the availability of oxygen in each hospital, the burn rate, etc. from the CARE Oxygen Dashboard.
There must be a central shifting team comprising:
Doctors: To clinically assess and identify which patient must be shifted to which facility
Logistics Managers: They use the CARE Capacity Dashboards to identify bed availability and arrange for Ambulance for transportation.
NOTE: In Ernakulam, only emergency shifts are executed by the District Central Shifting team. Shifting of patients who are less severely sick (like shifting a patient from home to a COVID Care Centre) is executed at the Taluk level. If this model is followed, 2 persons at the Taluk level to manage local shifting will also be required.
CoronaSafe Network has also developed a logistics management system to control the ambulance systems called the SURAKSHA NETWORK.
Training is conducted through an online certification course on CARE available HERE.
Training material in the form of videos is also available HERE.
A mobile training team of 10-15 members must be set up to conduct targeted training sessions wherever necessary.
Table of Human Resources Needed (Position: Number; Background/Designation):
Data Entry (Central Team): 15; volunteers/data entry staff
Data Entry Assistants: in each hospital (need-based); volunteers
Panchayat head to supervise patients under Home Care: one in each panchayat (need-based); Medical Officers at each panchayat
Shifting Team, Logistics Managers: 16 (split into shifts to work 24 hrs); volunteers/field-level staff
Shifting Team, Doctors: 4 (split into shifts to work 24 hrs); doctors
Shifting, Zonal/Taluk Level: 2 in each zone/taluk, based on need; volunteers/field-level staff
Training Team: 15; volunteers/staff with good communication skills
Zonal Heads: one for each zone/taluk; Health department zonal heads
Nodal Heads (for enforcement, capacity building, oxygen, HR management, and shifting): 5; senior health officials
CARE Nodal Officer: 1; head of the COVID task force from the Health Department
The SuperHero Network is an ambulance/logistics management tool with a web-based central management console and Android apps to monitor the location of movable assets and communicate with them.
Engineering Requirement
Coronasafe Logistics Platform (SuperHero App) is made up of 3 components
SuperHero App: written for Android (Kotlin); requires at least one experienced Android developer with Kotlin experience.
Logistics Dashboard: written in React.js/JavaScript; can be maintained by someone with React.js experience.
Backend: written in Node.js with a Postgres database. An experienced backend engineer with knowledge of Node.js/Postgres and the Sequelize ORM can maintain the application. Some knowledge of Firebase and AWS is recommended. The backend is currently deployed on AWS EC2 and the dashboard on AWS Amplify, so someone with experience in AWS is recommended for this role.
All information on deploying (tech) the SuperHero Network is available HERE.
To operationalise the SuperHero Network, follow the Go-Live Checklist below:
Your SuperHero Network is ready to use!!!
If you're an expert in networking, you can follow the four TL;DR instructions below and configure the monitors accordingly:
Identify the IP, subnet mask, and default gateway by using ipconfig in the command prompt.
Once the network is identified, pick suitable IP addresses for the BPL monitors, making sure there is no response when pinging each IP before assigning it to a monitor.
Assign IPs to the monitors and update the provided Google Sheet with the details [Bed Number, IP Address, Subnet Mask, Gateway IP, MAC Number].
You can request the 10BedICU Coordinator for confirmation.
Once you receive the confirmation from the Co-ordinator, you can configure the beds in the CNS software as per the BPL Documentation as well
Press Windows Key ( ⊞ ) + R, type cmd, and press Enter to open the command prompt.
Type ipconfig. The output shows the IP address, subnet mask, and gateway IP address; from this result we can identify the LAN network.
From the above information, identify the network address range. In this example, the IP address is 192.168.0.145 and the available IP range is 192.168.0.2 - 192.168.0.254.
Once you've identified the IP range, you can start configuring the BPL monitor IPs accordingly. The subnet mask and gateway IP will be the same for the monitor configuration.
Start assigning IP addresses to the monitors. Check the availability of an IP by pinging it from the command prompt, e.g. ping 192.168.0.14. If you get replies, the IP is already assigned to a device and is not usable; try a different IP until you find a usable one. If the IP is usable, you will get a response stating "Destination host unreachable", which means you can use that IP address for the BPL monitor.
After setting the IP address on the BPL monitor, restart the monitor so the newly set IP takes effect.
After the restart, ping the IP you assigned to make sure the monitor is now reachable at the new IP address.
Once this is done, open the Google Sheet for the hospital you are configuring and fill in the details of the setup [Bed Number, IP Address, Subnet Mask, Gateway IP, MAC Number].
Repeat the assignment and verification steps above for all the BPL monitors.
Once all the monitors are configured and all the IP addresses are updated in the sheet, request verification by the 10BedICU Co-ordinator.
After this, follow the documentation from BPL to install and set up the CNS software, then add the beds in the CNS software.