- Contributing
- Authentication and authorization
- Release process
- AWS Infrastructure
- Database Development
- CI/CD Pipeline
- SchemaSpy Database documentation
- Growthbook
- Metabase
- Intake Updates Guide
- Setting up a local development environment
- Setting up a local development environment in WSL
- Running PGTap database tests locally
- Running Jest and end-to-end tests locally
- Pre-commit hooks
- Modifying the database
- Database style rules
- Process to manipulate data in production
- Database backup with pg_dump
Required dependencies:
Have a local instance of Postgres running and ensure Sqitch can authenticate with it. The simplest way to do this is with a `.pgpass` file in your home directory containing the hostname, port, database name, username, and password:
127.0.0.1:5432:postgres:username:password
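A minimal sketch of creating that file (the credentials here are placeholders; libpq also honours the `PGPASSFILE` environment variable if you prefer a different location):

```shell
# Append a placeholder entry to ~/.pgpass (format: host:port:db:user:password).
PGPASS_FILE="${PGPASSFILE:-$HOME/.pgpass}"
echo '127.0.0.1:5432:postgres:username:password' >> "$PGPASS_FILE"
# libpq silently ignores the file unless its permissions are 0600.
chmod 600 "$PGPASS_FILE"
```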
Once Postgres 14 and Sqitch are set up, run the following in the root directory:
make drop_db && make deploy_dev_data
Alternatively, you can create the database using `createdb ccbc` and then run `sqitch deploy` in the `/db` directory.
This project uses node-convict to declare environment variables, which can be found in `/app/config/index`. Variables for each environment are declared in `development.json`, `test.json`, or `production.json`. The defaults can be overridden using a `.env` file placed in the `/app` directory.
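For example, a hypothetical `/app/.env` overriding a couple of defaults (the variable names are taken from the sections below; the values are illustrative):

```
# /app/.env -- values here override the node-convict defaults
ENABLE_AWS_LOGS=true
AWS_S3_BUCKET=my-dev-bucket
```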
Required dependencies:
Run the following commands:
$ cd app
$ yarn
$ yarn build:relay
$ yarn dev
This project uses react-relay and relay-nextjs. The `yarn build:relay` command above will create the query map schema, create the generated directory used by the Relay compiler to generate the GraphQL files, and finally build the required persisted operations.
While doing local development you might have to rerun this command when editing existing queries, creating new queries, adding new fragments, etc.; otherwise you might notice that query results are not being updated. To do so, simply stop your current development server and run `yarn build:relay && yarn dev`.
sudo cpan TAP::Parser::SourceHandler::pgTAP
$ git clone https://github.com/theory/pgtap.git
$ cd pgtap
$ git checkout v1.2.0
$ git branch
$ make
$ sudo make install
$ psql -c 'CREATE EXTENSION pgtap;'
To run the database tests run this command in the project root:
make db_unit_tests
Alternatively, you can run single tests with `pg_prove`. A test database is required, which is created with `make db_unit_tests` or by running `make create_test_db`.
Once the test database is created you can test a single file by running:
pg_prove -d ccbc_test <path to file>
In the `/app` directory run `yarn test`
Cypress and Happo are used for end-to-end testing. A Happo account and an API secret + key are required for Happo testing, though Happo is automatically disabled if no keys exist. Happo is free for open-source projects.
To run the end to end tests we need to run our development server in one terminal with the command:
yarn dev
Once that is running in a second terminal run:
yarn test:e2e
This project uses Happo for screenshot testing. Everyone who has contributor access to this repository has access to the Happo tests. If you require the admin access needed to modify the project or testing thresholds, contact a developer from this project or the CAS team.
The `resolveFileUpload` middleware is set up to use AWS S3 storage. If no namespace is set and any AWS environment variables are missing, uploads will be saved to the local filesystem in the `/app/uploads` folder.
OPENSHIFT_APP_NAMESPACE
AWS_S3_BUCKET
AWS_S3_REGION
AWS_S3_KEY
AWS_S3_SECRET_KEY
AWS_ROLE_ARN
To enable logging in the development console, set the `ENABLE_AWS_LOGS` environment variable to `true`.
Before releasing our application to the `test` and `prod` environments, an essential step is to add a tag to our sqitch plan to identify which database changes are released to prod and should be immutable.
Additionally, to facilitate identification of the changes that are released and communication around them, we want to:
- bump the version number, following semantic versioning
- generate a change log, based on the commit messages using the conventional commits format
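As an illustration of how a conventional commit type maps to a semantic-version bump, here is a simplified sketch (release-it and the commit tooling implement the real rules, which also inspect `BREAKING CHANGE:` footers and scopes):

```shell
# Simplified mapping from a conventional commit subject to a semver bump.
bump_for() {
  case "$1" in
    *'!:'*|*BREAKING*) echo major ;;   # e.g. "feat!: drop old intake API"
    feat*)             echo minor ;;   # new functionality
    fix*|perf*)        echo patch ;;   # bug fixes
    *)                 echo none  ;;   # chore, docs, etc. -- no release
  esac
}
bump_for "feat: add intake form"      # prints: minor
bump_for "fix: correct status label"  # prints: patch
```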
To make this process easy, we use release-it.
The process above has been automated and is performed by the ccbc-service-account on each merge to main. To start a release using this process:
- Find the PR named "chore:release" created by the ccbc-service-account
- Approve the PR and merge it
- Monitor the Action run; it will automatically deploy to dev, followed by test.
- If a prod deployment is needed, approve the release when prompted.
If a manual release is needed, perform the following steps:
- create a `chore/release` branch
- set the upstream with `git push -u origin chore/release`
- run `make release` and follow the prompts
- create a pull request
- once the pull request is approved, merge using the merge button in the GitHub UI. Only commits that are tagged can be deployed to test and prod.
If you want to override the version number, which is automatically determined based on the conventional commit messages being released, you can do so by passing a parameter to the `release-it` command, e.g. `yarn release-it 1.0.0-rc.1`
As mentioned above, the critical part of the release process is to tag the sqitch plan. While tagging the sqitch plan doesn't in itself change the behaviour of our migration scripts, it allows us to know which changes are deployed to prod (or about to be deployed), and therefore should be considered immutable.
We developed some guardrails (i.e. GitHub actions) to:
- ensure that changes that are part of a release are immutable: immutable-sqitch-change.yml
- ensure that the sqitch plan ends with a tag on the `main` branch, preventing deployments if that is not the case. Our release command automatically sets this tag: pre-release.yml
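A sketch of the kind of check the pre-release guardrail performs (illustrative only; the sample plan contents here are made up, and the real check lives in pre-release.yml):

```shell
# Create a tiny sample sqitch plan; in CI the real db/sqitch.plan is inspected.
cat > /tmp/sqitch.plan <<'EOF'
%syntax-version=1.0.0
initial_schema 2021-01-01T00:00:00Z Dev <dev@example.com> # first change
@v1.0.0 2021-02-01T00:00:00Z Dev <dev@example.com> # release tag
EOF

# Tag lines in a sqitch plan begin with '@'; the plan must end with one.
last_line=$(grep -v '^[[:space:]]*$' /tmp/sqitch.plan | tail -n 1)
case "$last_line" in
  @*) echo "OK: plan ends with a tag" ;;
  *)  echo "FAIL: plan must end with a tag" >&2; exit 1 ;;
esac
```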
Required dependencies:
The following is to be done in the root directory
Hooks are installed by running `make install_git_hooks`, which installs both the Python pre-commit hooks and a commit-msg hook provided by cocogitto
Before you can automate the deployment, you will need to manually deploy using Helm so that the deployer account gets created; refer to the deployment script for the command and to the app actions for the environment overrides
To deploy the project into a new namespace, or to deploy another instance of the project into an existing namespace, GitHub Environments along with Helm and GitHub Actions are used. The following steps can be used as a reference when deploying:
- Create a new environment on the GitHub repository, setting any protection rules as necessary. The environment will be used to hold the secrets that GitHub Actions passes to Helm.
- Add the following secrets and fill in as appropriate:
- AWS_CLAM_S3_BUCKET
- AWS_ROLE_ARN
- AWS_S3_BACKUPS_BUCKET
- AWS_S3_BUCKET
- AWS_S3_KEY
- AWS_S3_REGION
- AWS_S3_SECRET_KEY
- CLIENT_SECRET (SSO client secret)
- NEXT_PUBLIC_GROWTHBOOK_API_KEY
- OPENSHIFT_APP_NAMESPACE
- OPENSHIFT_METABASE_NAMESPACE (used for NetworkPolicy)
- OPENSHIFT_METABASE_PROD_NAMESPACE (used for NetworkPolicy)
- OPENSHIFT_ROUTE
- OPENSHIFT_SECURE_ROUTE
- OPENSHIFT_TOKEN
- CERT
- CERT_KEY
- CERT_CA
- Create any updated values as needed for your new deployment under `helm/app`. For example, if you named your environment `foo`, you will create a file named `values-foo.yaml`.
- Add an extra step to `.github/workflows/deploy.yaml` with an updated job and environment name.
- Run the action!
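For instance, a hypothetical `helm/app/values-foo.yaml` might look like the following (the keys are illustrative; copy the real structure from an existing values file in `helm/app`):

```yaml
# helm/app/values-foo.yaml -- hypothetical overrides for the "foo" environment
route:
  host: foo.example.gov.bc.ca
replicaCount: 1
```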
Note: there might be additional modifications or steps required to suit your specific needs. You might need to create independent workflows or Helm charts.
Please refer to CCBC Disaster Recovery Testing with Patroni
In case of a major disaster in which the database volume has been lost, refer to Restoring Backup volumes on OpenShift.
The project consists of several OpenShift CronJobs to automatically run the following tasks:
Managed by the PostgresCluster Operator (CrunchyDB); performs an incremental database backup every 4 hours, starting at 1:00 AM Pacific Time.
As above, managed by CrunchyDB. Performs a full backup of the database every day at 1:00 AM Pacific Time.
Marks all applications for a specific intake as received in the database. Runs twice a day, at 10:00 AM and 10:00 PM Pacific Time.
Sets any applications with a status of `submitted` to `received`.
Prepares attachments for download from the S3 bucket. Runs twice a day, at 10:00 AM and 10:00 PM Pacific Time.
To run any of the CronJobs above manually:
- Get the name of CronJob you want to run by running:
oc get CronJob
- To start the CronJob run: `oc create job --from cronjob/[NAME FROM STEP 1] [YOUR JOB NAME]`. For example, assuming the name of the CronJob from step 1 is `ccbc-pgbackrest-repo1-full` and we want to name our job `my-manual-job`, we would run `oc create job --from cronjob/ccbc-pgbackrest-repo1-full my-manual-job`.
- Once run, you should see `job.batch/[YOUR JOB NAME] created`
Note that you cannot run a job with the same name twice; if you need to rerun a job, either delete the old job and rerun the command from step 2, or use a different name.
Certificates are generated using the standard BC Government process:
- Create a submission for certificates through MySC.
- Generate a CSR or use one already generated and provide it when requested. If a new one is needed, you can use the following command, replacing `domain.ca` with the domain you are generating a certificate for:
  openssl req -new -newkey rsa:2048 -nodes -out domain.ca.csr -keyout domain.ca.key -subj "/C=CA/ST=British Columbia/L=Victoria/O=Government of the Province of British Columbia/OU=NetworkBC/CN=domain.ca"
- The step above will give you two files, `domain.ca.csr` and `domain.ca.key`. You will only need to share the CSR; the key will be saved in a secret as listed above during deployment.
- Once complete, you will receive a certificate and a chain. Use them in the `CERT` and `CERT_CA` fields, respectively. You might also need to update `CERT_KEY` if a new CSR was used.
- Repeat this process for any other certificates you need to renew (e.g., dev, test, etc.).
- Finally, to update the certificates, run the deploy action for each environment that needs updating.
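Before submitting a CSR, you can sanity-check it locally. A minimal sketch (this regenerates a throwaway key and CSR with the same command as above, then verifies the request and prints its subject):

```shell
# Generate a CSR and key (same command as step 2, using the placeholder domain.ca).
openssl req -new -newkey rsa:2048 -nodes -out domain.ca.csr -keyout domain.ca.key \
  -subj "/C=CA/ST=British Columbia/L=Victoria/O=Government of the Province of British Columbia/OU=NetworkBC/CN=domain.ca"

# Verify the CSR's signature and print its subject before sharing it.
openssl req -in domain.ca.csr -noout -verify -subject
```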
GrowthBook is an open-source platform for feature flagging and A/B testing. In the context of our application, we use it as follows:
- Feature Flagging: Enables or disables functionality per environment (dev, test, and prod).
- Banners: GrowthBook allows setting feature values beyond true and false. This allows us to create "objects" and read them in the app to set custom banners.
For detailed instructions on how to add and manage features, please refer to the GrowthBook docs.
To get access to GrowthBook for this project, contact one of the administrators. You will then be able to create and manage feature flags.
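For example, a banner feature's value could be a small JSON object like the following (the field names are hypothetical; the app's banner component defines the actual shape):

```json
{
  "type": "warning",
  "message": "Scheduled maintenance this Sunday from 1:00 to 3:00 AM Pacific Time"
}
```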
Metabase is an open-source business intelligence tool that lets you create charts and dashboards using data from a variety of databases and data sources. In the context of our application, we use it to generate a variety of visualizations based on the data in the portal as well as other data consumed into our database.
Our Metabase is deployed using Helm; it consists of a Docker image as well as a CrunchyDB (Postgres) database. The Metabase repo can be found here.
For more information on Metabase, visit the official docs.
For information on queries/questions, refer to this documentation.