Configuration
coherence.yml
Just like every other devops/infrastructure tool, we have a .yml file we're looking for inside each repo you connect to Coherence.
- By default the file is called coherence.yml and it can be placed anywhere in your repo for gitops-style management. There is also an option to manage the file in the UI instead of in the repo. You can choose this option during initial onboarding - if you'd like to change it once the application is set up, please get in touch! If you store the file in the repo and would like to name it something else, or change its path, you can do so in the settings section of the application.
- It is possible to have multiple configs per repo, to support a monorepo.
With this file, you'll configure Coherence to:
- Provision full-stack branch preview environments and Staging/Production environments for gitops across all your services - with support for any application that can be containerized.
- Run CI/CD pipelines with build, test, and deploy steps - using first-party managed cloud tools such as GCP Cloud Build or AWS CodePipeline.
  - Supports compiled languages such as Java or Go seamlessly.
  - Parallelized test steps for increased performance.
  - Integrated database seeding and migrations.
- Provide a hosted cloud IDE and cloud shell for each environment.
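At its simplest, a config declares a single containerized service - a minimal sketch (the commands and repo layout are illustrative, not required values):

backend:
  type: backend
  # build context for this service's container
  repo_path: backend
  # command for the dev server in a Workspace
  dev:
    command: ["npm", "run", "dev"]
  # command for deployed environments
  prod:
    command: ["npm", "run", "start"]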
Routing and Domains
Services in one Coherence application are deployed behind a shared load balancer and domain, and path-based routing determines which service receives traffic for each request (see the sketch after this list).
- Each environment (Staging/Production/Branch Previews or Cloud IDE) creates its own deployment with a unique URL
- Services in the same application will deploy from the same pipeline, with backend services deploying before frontend services.
- Within an application, frontend services can use relative paths to access backend services. No special logic is needed to know the api URL in an application with a frontend and backend in the same yml, for example.
- If you require distinct lifecycles for your services, separate repos for your services, or unique domains per service (e.g. app.mysite.com and api.mysite.com), then you can use multiple coherence.yml files, and therefore create multiple applications.
- Each application's Workspaces will be unique. In this scenario you can reference another application's services using one of its deployed environments - for example, the staging environment of a backend from the Workspace of an application that is only a frontend.
- Applications can refer to services from other applications using their URLs, which can, for example, be configured via environment variables. If both are in the same repo, regex logic based on the branch name that substitutes the application name into Coherence-generated URLs can be useful.
- If both applications are in the same repo, their deployment lifecycles for push-based deploys are coupled by the GitHub webhooks. Manual deploys will still require action in both applications' UIs in Coherence.
- Applications each incur fixed costs for cloud resources such as VPC resources.
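As a sketch of path-based routing, two services in one application might split the shared domain by path using the url_path setting shown in the sample config below (this assumes the more specific /api path takes precedence over the / catch-all):

frontend:
  type: frontend
  # receives requests not matched by another service's path
  url_path: /
backend:
  type: backend
  # requests under /api route to this service
  url_path: /api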
Sample coherence.yml
Example
Here is an example coherence.yml that uses all of the supported settings. See more examples here. This app has 2 services, one frontend and one backend.
# this is an advanced option (AWS-only) to run multiple Coherence apps in the same VPC
allow_vpc_sharing: true

# service name
frontend:
  # service type, required for each service
  type: frontend
  # optional, for a frontend app this is the index file of your SPA
  index_file_name: index.html
  # optional, defaults to /, what URL path should route to this service
  # must be unique for each service in an application
  url_path: /
  # this is the build context for the container for this service
  repo_path: frontend
  # where built assets get put - we will copy this whole directory to the CDN
  assets_path: build
  # these will be copied from the built docker container into the Workspace
  # this setting is only relevant for a Workspace, not a preview or static environment
  local_packages: ["node_modules"]
  # what command builds assets for this service?
  build: ["yarn", "build"]
  # an optional array of tests to run in CI/CD
  test:
    - ["foo", "bar"]
    - ["lint", "1"]
  # what command to run in the Workspace, for the dev server
  # can optionally supply a Dockerfile for the dev container here as well
  dev:
    command: ["yarn", "dev"]
  # this will set the `Access-Control-Allow-Origin` header in the response
  cors:
    allowed_origins: ["www.example.com"]
  # optional config for resources
  system:
    # these cpu & memory only apply for Workspaces
    dev:
      cpu: 2
      memory: 4G

# service name
backend:
  # service type, required for each service
  type: backend
  # optional, defaults to /, what URL path should route to this service
  # must be unique for each service in an application
  url_path: /api
  # this is the build context for the container for this service
  repo_path: backend
  # this will add a step to CI for database migrations
  migration: ["migration", "command"]
  # this will add a step to CI for database seeding
  seed: ["seed", "command"]
  # what command to run in the Workspace, for the dev server
  # can optionally supply a Dockerfile for the dev container here as well
  dev:
    command: ["run", "command"]
    dockerfile: "Dockerfile.dev"
  # an optional array of tests to run in CI
  test:
    - ["foo", "bar"]
    - ["foo", "baz"]
  # what command to run in a deployed environment
  # this will run in previews and static environments as well, not just prod
  # this Dockerfile will be the default for dev if none is supplied there
  prod:
    command: ["run", "command"]
    dockerfile: "Dockerfile.prod"
  # optional step to compile in a different container than you build into
  compile:
    image: "foo/bar:1.2.3"
    command: ["foo", "bar"]
    entrypoint: "foo"
  # array of async workers that consume jobs
  # e.g. celery or sidekiq
  workers:
    # optional, if supplied will run just this 1 worker in a Workspace
    - name: dev_workspace
      command: ["worker", "dev", "command"]
    # optional, if supplied will run just this 1 worker in a preview branch
    # this is to save $$ in preview deployments
    - name: preview_environment
      command: ["worker", "preview", "command"]
    - name: default queue worker 1
      command: ["worker", "command"]
    - name: default queue worker 2
      command: ["worker", "command"]
  # these will be run as cron tasks in the container runtime (k8s or ECS)
  scheduled_tasks:
    - name: task 1
      command: ["sleep"]
      schedule: "* * * * *"
  # resources for each environment
  resources:
    - name: db1
      engine: postgres
      version: 13
      type: database
    - name: redis
      engine: redis
      version: 4
      type: cache
    # maps to S3 or Google Cloud Storage
    - name: test_bucket
      type: object_storage
      cors:
        - allowed_methods: ["GET", "POST", "PUT", "DELETE"]
          allowed_origins: ["www.example.com"]
        - allowed_methods: ["GET"]
          allowed_origins: ["*"]
  # optional
  system:
    # deployed env resource settings
    memory: 2G
    cpu: 1
    # Workspace resource settings
    dev:
      cpu: 4
      memory: 3G
  # optional, controls scale
  # on AWS, these only apply to prod
  platform_settings:
    min_scale: 2
    max_scale: 6
    throttle_cpu: true

# controls the machine type used in CI pipelines
build_settings:
  platform_settings:
    machine_type: "N1_HIGHCPU_8"

# optional, adds an integration test step after each deploy to a preview
integration_test:
  type: integration_test
  command: ["cypress", "run", "--record"]
  # this would usually be a 3rd-party supplied image
  image: "cypress/included:9.4.1"

# pause preview environments after this long without a push, to save $
preview_inactivity_timeout_hours: 72

# (AWS-only) optional config to choose which VPC to share into
vpc_sharing:
  app_name: my-first-app
  fallback_environment: main
For all services
VPC Sharing
Only supported on AWS at this time.
By default, each application in Coherence creates its own VPC. This means it creates distinct resources such as a load balancer and internet gateway, and also means that it cannot share resources such as a database or redis with other apps. If you prefer, Coherence can launch apps in shared VPCs. This works as follows: you create one app and configure it to allow VPC sharing, then create a second app that references the first app's name and the environment to fall back to for resource sharing when there isn't an environment with the same name to reference.
Resource sharing works by injecting the env vars (e.g. DATABASE_URL) from app 1's services into the services in app 2, and allowing app 2 to use the security group rules from app 1. This enables seamless sharing of resources.
The allow_vpc_sharing boolean is set on the first app in the above flow. The second app adds the following yml:
vpc_sharing:
  app_name: my-first-app
  fallback_environment: main
This tells Coherence the name of the first app (which you gave it in the Coherence dashboard), as well as the fallback environment name. In both apps, the yml must be merged into the app's default branch for Coherence to accept the configuration. Additionally, both apps must configure the same account IDs for Review and Production for VPC sharing to work.
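Putting both sides together, the two configs look like this (my-first-app is the name from the example above):

# coherence.yml for app 1, named "my-first-app" in the dashboard
allow_vpc_sharing: true

# coherence.yml for app 2, sharing into app 1's VPC
vpc_sharing:
  app_name: my-first-app
  fallback_environment: main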
Pausing Preview Environments
The top-level configuration value preview_inactivity_timeout_hours configures how long Coherence keeps preview environments active before pausing them. When paused, database instances are preserved but all other infrastructure is removed. Activity is judged by the most recent push to the branch (regardless of the resulting build status or success). Pushing a new build to a paused environment will reactivate it and re-provision the infrastructure. The status can also be updated in the dashboard at any time. The default is currently 90 days.
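For example, to pause previews after three days without a push (as in the sample config above):

preview_inactivity_timeout_hours: 72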
Integration Testing
name_of_your_integration_tests:
  type: integration_test
  command: ["cypress", "run", "--record"]
  image: "cypress/included:9.4.1"
- You can run integration tests as part of your build process in Google Cloud Build or CodePipeline on AWS.
- Include your integration tests as a top-level block alongside your application's services, with a type of integration_test.
- Include the image of your test container and a command to run it, and we will include your tests as a build step.
- Any environment variables that your tests need (CYPRESS_RECORD_KEY, for example) can be set using our config UI in Coherence. COHERENCE_BASE_URL will be set as an environment variable containing the URL of the Coherence environment you are running in. Your tests can make requests to this URL.
Build Settings
build_settings:
  platform_settings:
    machine_type: "N1_HIGHCPU_8"
GCP
You can set the platform_settings
property for machine_type
using the values here for Google Cloud Build to configure the machine type for CI pipelines generated by Coherence.
AWS
You can set the platform_settings
property for machine_type
using the values here for CodeBuild to configure the machine type for CI pipelines generated by Coherence.
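For example, on AWS (a sketch - BUILD_GENERAL1_LARGE is one of the CodeBuild compute type values; see the AWS docs for the full list):

build_settings:
  platform_settings:
    machine_type: "BUILD_GENERAL1_LARGE"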
Frontend services
Custom headers
Default response headers can be defined for frontend services:
frontend:
  type: frontend
  ...
  custom_headers:
    - name: x-frame-options
      value: SAMEORIGIN
    - name: access-control-max-age
      value: 86400
Use an existing image
In some cases you may not want or need Coherence to build your docker image.
service_name:
  type: backend
  ...
  dev:
    command: ["npm", "run", "dev"]
    use_existing:
      image: docker.io/mydevimage
      mode: static
      tag: latest
  prod:
    command: ["npm", "run", "start"]
    use_existing:
      image: docker.io/myprodimage
      mode: static
      tag: latest
The use_existing configuration can be added for prod and/or dev and has the following attributes:
image (required)
The name of the image. If the image is not coming directly from Docker Hub, be sure to specify the full repository URL, e.g. gcr.io/coherence-public/docker-compose
mode (optional, default: static, allowed_values: [static, sha])
The mode dictates how Coherence determines the tag to use for an image.
static mode - the image will always use the tag attribute as its tag
sha mode - the tag used for the image will be the commit sha of the repository/branch. This mode is especially useful when the image build happens outside of Coherence (e.g. building the image in GitHub Actions before the Coherence pipeline runs)
tag (optional, default: latest)
The image tag that will be used. This attribute should not be set when using sha mode.
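For example, a sha-mode sketch (the image name is illustrative; your external build must push a tag equal to the commit sha):

service_name:
  type: backend
  ...
  prod:
    command: ["npm", "run", "start"]
    use_existing:
      image: docker.io/myprodimage
      mode: sha
      # no tag attribute: the commit sha of the deployed branch is used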
Additional frontend service considerations
frontend:
  type: frontend
  build: ["npm", "run", "build"]
  assets_path: build
  ...
  prod:
    use_existing:
      image: docker.io/nginx
      mode: static
      tag: latest
For a frontend service, the build process in Coherence is typically:
1. build the docker image
2. start a container using the built image
3. run the frontend service build command in the container
4. copy assets from assets_path to be served by the frontend service
N.B. If an existing image is being used for a frontend service, it will only replace step #1 above. A build command and assets_path are still expected to be provided.