Managed Care Review is an application that accepts Managed Care contract and rate submissions from states and packages them for review by CMS. It uses a serverless architecture (services deployed as AWS Lambdas) with React and Node as client/server and GraphQL as the API protocol. The codebase is a TypeScript monorepo. An architectural diagram is also available.
- Managed Care Review Confluence page. Includes an overview of the project, information about planned features, and ADRs (architectural decision records).
- `./docs` folder. Includes architectural decision records and technical design documents.
- OAuth Implementation. Details our OAuth 2.0 implementation for API authentication.
- `./services` README files. Include a brief summary of each service and its key dependencies.
- API Changelog. Includes API schema changes that have entered the codebase since May 2025.
- Node.js
- Serverless - Get help installing it here: Serverless Getting Started page.
- pnpm - In order to install dependencies, you need to install pnpm.
- AWS Account - You'll need an AWS account with appropriate IAM permissions (admin recommended) to deploy this app in Amazon.
- NVM - If you are on a Mac using nvm, you should be able to install all the dependencies as described below.
- direnv - Used to set environment variables locally via `.envrc`
- Docker - Used to run Postgres locally
We use a collection of tools to manage this monorepo.
For monorepo tooling we rely on pnpm's workspace configuration. Given the -r (recursive) flag, pnpm will run a package.json script matching the given name in every service. For example, pnpm -r generate will run the generate command in every service that has a generate command specified in its package.json. We also use Husky to run and organize our pre-commit scripts - e.g., Husky uses the command pnpm precommit to run the specific precommit script indicated in each package.json.
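For reference, pnpm workspaces are declared in a pnpm-workspace.yaml at the repository root. A minimal sketch (the actual file's globs may differ):

```yaml
# pnpm-workspace.yaml -- illustrative; check the repo root for the real globs
packages:
  - 'services/*'
  - 'packages/*'
```

With a file like this in place, `pnpm -r generate` visits every matched package and runs its generate script if one is defined.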
To get the tools needed for local development, you can run:
```bash
brew install pnpm direnv entr shellcheck detect-secrets
pnpm install husky
```

We use direnv to automatically set required environment variables when you enter this directory or its children. This will be used when running the application locally, or when using tools like the aws or serverless CLIs locally.
If you've never set up direnv before, add the following to the bottom of your .bashrc:

```bash
if command -v direnv >/dev/null; then
    eval "$(direnv hook bash)"
fi
```

If using zsh, add the following to your .zshrc:

```bash
eval "$(direnv hook zsh)"
```

After adding the hook, start a new shell so it runs.
The first time you enter a directory with an .envrc file, you'll receive a
warning like:
```
direnv: error /some/path/to/.envrc is blocked. Run `direnv allow` to approve its content
```
Run direnv allow to allow the environment to load.
```bash
# install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash

# load nvm and restart terminal
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion

# double check your work
nvm # should return a list of nvm commands
node -v # should return v12.20.0
which node # should return something like /Users/YOURUSER/.nvm/versions/node/v12.20.0/bin/node
# if things aren't working you may need to manually adjust your ~/.bash_profile or ~/.zshrc.
# See the nvm docs (https://github.com/nvm-sh/nvm#troubleshooting-on-macos) for more.

# install and use the node version specified in .nvmrc
nvm install
nvm use

# install pnpm for dependency management (see https://pnpm.io/installation)

# run the app and storybook
./dev local
```
Run all the services locally with the command ./dev local.
See the above Requirements section if the command asks for any prerequisites you don't have installed.
The ./dev script is written in TypeScript in ./src. The entry point is ./src/dev.ts, which manages running the moving pieces locally: the API, the database, the file store, and the frontend.
Local dev runs services via Docker Compose and a custom Express server that wraps Lambda handlers. The Express server (services/app-api/src/local-server.ts) converts HTTP requests to Lambda events and provides hot reload via nodemon. Infrastructure services (Postgres, S3 via LocalStack, and OpenTelemetry via Jaeger) run in Docker containers managed by docker-compose.yml.
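As a rough sketch of the shape of that setup (not the project's actual docker-compose.yml; image tags and ports here are assumptions):

```yaml
# Illustrative only -- see the repo's docker-compose.yml for the real config
services:
  postgres:
    image: postgres:13
    ports: ['5432:5432']
  localstack:
    image: localstack/localstack
    environment:
      - SERVICES=s3      # S3-compatible file store
    ports: ['4566:4566']
  jaeger:
    image: jaegertracing/all-in-one
    ports: ['16686:16686'] # Jaeger UI for OpenTelemetry traces
```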
When run locally (with VITE_APP_AUTH_MODE=LOCAL), auth bypasses Cognito. The frontend mimics login in local storage with mock users and sends user info in the cognito-authentication-provider header on every request. The local Express server maps this header into the event.requestContext.identity for lambdas, just like API Gateway would in AWS.
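The header-to-identity mapping can be sketched as follows. This is an illustrative sketch, not the actual local-server.ts code; the `toLambdaEvent` helper and the user shape are assumptions made for the example:

```typescript
// Sketch of how a local Express layer might map the mock-auth header into
// the shape API Gateway would give a Lambda. Hypothetical helper; the real
// implementation lives in services/app-api/src/local-server.ts.
type LocalUser = { id: string; role: string }

interface LambdaEventLike {
    requestContext: {
        identity: { cognitoAuthenticationProvider: string | null }
    }
}

function toLambdaEvent(
    headers: Record<string, string | undefined>
): LambdaEventLike {
    return {
        requestContext: {
            identity: {
                // In AWS, API Gateway fills this in from Cognito; locally we
                // copy it straight from the header the frontend sends.
                cognitoAuthenticationProvider:
                    headers['cognito-authentication-provider'] ?? null,
            },
        },
    }
}

const mockUser: LocalUser = { id: 'user1', role: 'STATE_USER' }
const event = toLambdaEvent({
    'cognito-authentication-provider': JSON.stringify(mockUser),
})
console.log(event.requestContext.identity.cognitoAuthenticationProvider)
// → {"id":"user1","role":"STATE_USER"}
```

Lambda resolvers can then read the user from `event.requestContext.identity` without knowing whether the request came from API Gateway or the local server.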
./dev is a program for doing development on Managed Care Review. It can run services locally, run tests, lint, and more. Discover everything it can do with ./dev --help. Anything you find yourself doing as a developer on this project, feel free to add to ./dev.
Run whole app locally

- `./dev local` to run the entire app and storybook
- Available flags: `--web`, `--api`, `--s3`, `--postgres`, `--launch-darkly` for running services individually (you can also exclude services using the yargs 'no' standard: `./dev local --no-web`)

Run individual services locally

- `./dev local web`, `./dev local api`, etc.
- Some of those services have their own options as well, namely app-web; see below for more info.

Run tests locally

- `./dev test web` to run the web tests, watching the results; requires the database to be running.
- `./dev test api` to run the api tests, watching the results; requires the database to be running.
- `./dev test browser` to run the Cypress browser-based tests. This opens the Cypress runner and requires an endpoint to test against. By default it runs against localhost, so you should be running the app locally if that is what you intend. To see the flags Cypress accepts, see its docs.
- `./dev test` (or `./dev test check`) to run all the tests that CI runs, once. This runs the web, api, and browser tests; requires the database to be running.
- Run with flags `./dev test --unit`, `./dev test --online` to filter down, but still run once.

Clear and rebuild dependencies

- `./dev clean && ./dev rebuild`

Run storybook

- `./dev local storybook`

Run web app locally, but configured to run against a deployed backend

- `./dev local web --hybrid`
- For local dev testing, you should push your local branch to deploy a review app; `./dev local web --hybrid` will then connect to that running review app by default.
- If you want to specify a different instance to run against, set the `--hybrid-stage` parameter. For more info about stages/accounts, see the Deploy section below.
Style guide: Any new script added to a package.json file should prefer the format of task:subtask. For example, test, test:once, and test:coverage rather than test_once and test_coverage.
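For example, a package.json following this convention might look like the fragment below (the runner commands are illustrative, not this repo's actual scripts):

```json
{
    "scripts": {
        "test": "vitest",
        "test:once": "vitest run",
        "test:coverage": "vitest run --coverage"
    }
}
```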
We've had a number of issues only reproduced in cypress being run in GitHub Actions. We've added tooling to dev to run our cypress tests locally in a linux docker container which has been able to reproduce those issues. To do so, you'll need to have docker installed and running and run the app locally with ./dev local like normal to provide the api & postgres & s3 (you could just run those three services if you like). Unfortunately, docker networking is a little weird, so we have to run a separate web in order for the cypress tests to be able to reach our app correctly. That's started with ./dev local web --for-docker. Finally, you can run the tests themselves with ./dev test browser --in-docker. So minimally:
```bash
./dev local --api --postgres --s3
./dev local web --for-docker
./dev test browser --in-docker
```

And since this has to run headless because it's in Docker, you can see how the test actually ran by opening the video that Cypress records in ./services/cypress/videos.
We are using Postgres as our primary data store and Prisma as our interface to it. If you want to modify our database's schema, you must use the prisma migrate command in app-api. ./dev prisma forwards all arguments to prisma in app-api.
We describe our database tables and relationships between them in our Prisma schema at /services/app-api/prisma/schema.prisma. If you want to change our database, start by changing that schema file how you like.
For more suggestions about how to run a migration, see how to complete migrations
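As a sketch of the workflow (the model and field names here are hypothetical): add the column to the schema, then let prisma generate the migration.

```prisma
// services/app-api/prisma/schema.prisma -- hypothetical example model
model Contact {
  id    String @id @default(uuid())
  email String // 1. add the new column here...
}
```

...then generate and apply a migration through the forwarding wrapper, e.g. `./dev prisma migrate dev --name add-contact-email` (the migration name is illustrative).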
See main build/deploy here
This application is built and deployed via GitHub Actions. See .github/workflows.
This application is deployed into three different AWS accounts: Dev, Val, and Prod. Anytime the main branch is updated (i.e. a PR is merged) we deploy to each environment in turn. If interacting with those accounts directly, each one will require different AWS keys.
In the Dev account, in addition to deploying the main branch, we deploy a full version of the app on every branch that is pushed that is not the main branch. We call these deployments "review apps" since they host all the changes for a PR in a full deployment. These review apps are differentiated by their Serverless "stack" name. This is set to the branch name, and all infra ends up being prefixed with it to prevent any overlap.
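As an illustration of how that stage prefixing works in Serverless (a sketch, not a real config from this repo; the resource names are assumptions):

```yaml
# Illustrative serverless.yml fragment
service: app-api
provider:
  name: aws
  stage: ${opt:stage, 'main'}   # review apps pass the branch name here
resources:
  Resources:
    UploadsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: uploads-${sls:stage} # branch-prefixed, so stacks don't collide
```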
We have a script (getChangedServices) that runs in CI to check whether a service needs to be re-deployed due to your most recent commit or can be skipped to save CI deploy time. For example, if you're only making changes to app-web, you likely won't need to re-deploy any infra services, such as postgres, after an initial branch deploy. If you do need your branch to be fully re-deployed, add the string force-ci-run to your commit message and the entire deployment workflow will run. If you have a failing Cypress container and want to skip deploying infra and the application, use the string cypress re-run in a commit message and the getChangedServices script will skip ahead to the Cypress run (unit tests will still run, but it still saves time).
You can see the deployments for review apps here
When a script gets too complicated, we prefer it not be written in Bash. Since we're using typescript for everything else, we're writing scripts in TypeScript as well. They are located in /src/scripts and are compiled along with dev.ts any time you execute ./dev. They can be invoked like node build_dev/scripts/add_cypress_test_users.js
These dependencies can be installed if you want or need to run aws or serverless (sls) commands locally.
Before beginning, it is assumed you have:
- Added your public key to your GitHub account
- Cloned this repo locally
The following should install everything you need on macOS:
```bash
brew install awscli shellcheck
```

AWS access is managed via Active Directory and Cloudtamer.
In order to run commands against a live AWS environment, you need to configure AWS keys to grant you authorization to do so. You will need this for sure to run the ./dev hybrid command, and might be necessary to run any serverless commands directly by hand.
We can use the ctkey tool to make setting up the appropriate access easier, which is described below.
ctkey is a tool provided by Cloudtamer that allows you to generate temporary
AWS access keys from your CLI/terminal using cloudtamer.cms.gov.
See Getting started with Access Key CLI tool for a link to download the ctkey tool.
Download and unzip the ctkey file onto your local computer. Move the
executable that is applicable to your system (e.g., Mac/OS X) to a directory in
your PATH. Rename the executable to ctkey.
To verify things are working, run:
```bash
ctkey --version
```

Mac users: If you get an OS X error about the file not being trusted, go to System Preferences > Security > General and click to allow ctkey.
```
scripts
├── aws -> ctkey-wrapper
├── ctkey-wrapper
└── serverless -> ctkey-wrapper
```
ctkey-wrapper is a small bash script that runs the ctkey command to generate your temporary
CloudTamer credentials and exports them to your local environment.
With ctkey-wrapper in place, you can simply run
aws or serverless commands in this directory and ctkey-wrapper manages all
of the ctkey complexity behind the scenes.
First, you'll need to add the following to your .envrc.local:

```bash
# the following adds ./scripts to the head of your PATH
PATH_add ./scripts

export CTKEY_USERNAME=''
export CTKEY_PASSWORD=''
export AWS_ACCOUNT_ID=''
```
Your CTKEY_USERNAME and CTKEY_PASSWORD should be the credentials you use to log in to EUA. They will need to be updated whenever your EUA password expires and has been rotated.
Your AWS_ACCOUNT_ID is the ID of the environment you wish to access locally. Typically, this will be the AWS ID of the dev environment.
Currently, ctkey-wrapper requires the user to be running the openconnect-tinyproxy container in order to connect to Cloudtamer.
Our project uses Serverless Framework to manage our lambdas and infrastructure. In order to use Serverless v4 in local dev, you'll need to get the license key from our team's 1password space and add it as a value to your .envrc.local under the key SERVERLESS_LICENSE_KEY. This is a new requirement as Serverless went to a paid product in v4.
The dev tool should install serverless globally with pnpm install -g serverless@4.2.3. To verify the tool is accessible, run:

```bash
which serverless
```

This will output the path to the tool, which is likely installed under your ~/Library/pnpm/* path.
We can then verify things are working by running any serverless command, e.g. `cd services/app-api && serverless info --stage main`. This command should print stack information without any Serverless errors about AWS credentials. If you do receive credential errors, the tool may have accidentally been installed as a version > 4.2.3, and you'll need to downgrade to that version. Serverless versions after 4.2.3 require an AWS connection even for local development, so we've pinned our version until we can fully move to AWS CDK in Spring 2025.
The Serverless framework calls encapsulated units of lambdas + AWS infrastructure a "service", so we've inherited this terminology from that project. All of our services live under the ./services/ directory. If you need to add a new service to the project, a few things need to happen:
- Add a serverless.yml file to the root directory of this new service. You can copy an existing config or run the serverless command in ./services/${service-name} to use one of their starter templates.
- If this service is going to require js or ts code, you'll want to create a src directory as well as copy over the appropriate tsconfig.json and .eslintrc configs. Refer to one of the existing services to get an idea of how we are currently doing this.

You'll need to add this service to our deployment GitHub Actions workflows:

- If it is only infrastructure, it can be added to ./.github/workflows/deploy-infra-to-env.yml.
- Services that include application code can be added to ./.github/workflows/deploy-app-to-env.yml.
- We have a CI script that skips branch redeploys when possible in ./scripts/get-changed-services/index.ts. Make sure your service is added to that list.
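The service list in that CI script can be sketched as below. This is illustrative only; the real list lives in ./scripts/get-changed-services/index.ts and its entries and shape may differ:

```typescript
// Hypothetical sketch of the known-services list checked by CI.
const allServices: string[] = [
    'app-web',
    'app-api',
    'postgres',
    'my-new-service', // <- add your new service's directory name here
]

function isKnownService(name: string): boolean {
    return allServices.includes(name)
}

console.log(isKnownService('my-new-service')) // → true
```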
Read more in monitoring documentation.
We currently use the CMS Federal (.us) install of Launch Darkly to manage our feature flags. This can be accessed through LD Federal by providing the email address associated with your EUA account (e.g. @teamtrussworks.com), which will redirect you to CMS SSO.
There are technical design docs about when to add and remove feature flags and how to test with feature flags.
When running locally, a local LaunchDarkly service (services/local-launch-darkly) replaces the real LD endpoints. It starts automatically with ./dev local and provides a UI at http://localhost:3031 for toggling feature flags during development.
On startup, the service can fetch initial flag values from real LaunchDarkly if LD_SDK_KEY is set to a valid key in .envrc.local. Otherwise it falls back to the defaults defined in packages/common-code/src/featureFlags/flags.ts.
Both app-web (via Vite proxy) and app-api (via LOCAL_LD_SERVICE_URL) connect to this local service when VITE_APP_AUTH_MODE=LOCAL.
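The shape of those fallback defaults can be sketched as follows. The flag names here are hypothetical; the real definitions live in packages/common-code/src/featureFlags/flags.ts:

```typescript
// Illustrative sketch of default flag values used when no LD_SDK_KEY is set.
const featureFlags = {
    'rate-edit-unlock': { defaultValue: false },
    'site-maintenance-banner': { defaultValue: false },
} as const

type FlagName = keyof typeof featureFlags

function flagDefault(flag: FlagName): boolean {
    return featureFlags[flag].defaultValue
}

console.log(flagDefault('rate-edit-unlock')) // → false
```

Keeping defaults in one typed map like this means both app-web and app-api agree on a flag's value even when the local LD service has nothing to serve.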
We welcome contributions to this project. MC Review is an internal CMS tool for facilitating the review of state Medicaid contracts. It is developed by a federal contracting team under contract with CMS and is deployed internally for that purpose. MC Review is built using agile development processes and accepts both issues and feature requests via GitHub issues on this repository. If you'd like to contribute changes to this code base, please create a pull request and a team member will review your work. While this repository is dedicated primarily to delivering MC Review to the government, if you find any parts of it useful or find any errors in the code, we would love your contributions and feedback. All contributors are required to follow our Code of Conduct.
See LICENSE for full details.
As a work of the United States Government, this project is
in the public domain within the United States.
Additionally, we waive copyright and related rights in the
work worldwide through the CC0 1.0 Universal public domain dedication.