Metriport is an open-source universal API for healthcare data.
Metriport helps healthcare organizations access comprehensive patient medical data through an open-source universal API.
Learn more »
Docs
·
NPM
·
Developer Dashboard
·
Website
Metriport is SOC 2 and HIPAA compliant. Click here to learn more about our security practices.
Our Medical API brings you data from the largest clinical data networks in the country - one open-source API, 300+ million patients.
Metriport ensures clinical accuracy and completeness of medical information, with HL7 FHIR, C-CDA, and PDF formats supported. Through standardizing, de-duplicating, consolidating, and hydrating data with medical code crosswalking, Metriport delivers rich and comprehensive patient data at the point-of-care.
Our Medical Dashboard enables providers to streamline their patient record retrieval process. Get up and running within minutes, accessing the largest health information networks in the country through a user-friendly interface.
Tools like our FHIR explorer and PDF converter help you make sense of the data you need to make relevant care decisions and improve patient outcomes.
Check out the links below to get started with Metriport in minutes!
- /api: Backend for the Metriport API.
- /infra: We use AWS CDK as IaC.
- /docs: Our beautiful developer documentation, powered by mintlify ❤️.
Check out our packages in /packages to help you turbocharge your development:
Our npm packages are available in /packages:
Before getting started with the deployment or any development, ensure you have done the following:
- Use Route 53 to handle the DNS for your domain, and create a hosted zone.
- Install TypeScript to bootstrap the AWS CDK on your local machine.

This monorepo uses npm workspaces to manage the packages and execute commands globally. Not all folders under /packages are part of the workspace, though. To see which ones are, check the root folder's package.json under the workspaces section.
To set up this repository for local development, issue these commands in the root folder:
$ npm run init # only needs to be run once
$ npm run build # packages depend on each other, so it's best to build/compile them all
Useful commands:
- npm run test: executes the test script on all workspaces;
- npm run typecheck: runs typecheck on all workspaces, which checks for TypeScript compilation/syntax issues;
- npm run lint-fix: runs lint-fix on all workspaces, which checks for linting issues and automatically fixes the ones it can;
- npm run prettier-fix: runs prettier-fix on all workspaces, which checks for formatting issues and automatically fixes the ones it can.

This repo uses Semantic Versioning, and we automate the versioning by using Conventional Commits.
This means all commit messages must be created following a certain standard:
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
To enforce that commits follow this pattern, we have a Git hook (using Husky) that verifies commit messages against the Conventional Commits standard - it uses commitlint under the hood (config).
Accepted types:
Scope is optional, and we can use one of these, or empty (no scope):
The footer should have the ticket number supporting the commit:
...
Ref: #<ticket-number>
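To illustrate, a filled-in message following this pattern could look like the below (the type, scope, description, and ticket number are made-up examples):

```
feat(api): add patient document query endpoint

Adds a new endpoint to query patient documents.

Ref: #123
```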
One can enter the commit message manually and have commitlint check its content, or use Commitizen's CLI to guide you through building the commit message:
$ npm run commit
In case something goes wrong after you prepare the commit message and you want to retry it after fixing the issue, you can issue this command:
$ npm run commit -- --retry
Commitizen will retry the last commit message you prepared previously. More about this here.
To avoid pushing secrets to the remote git repository we use Gitleaks - triggered by Husky.
From their repository:
Gitleaks is a SAST tool for detecting and preventing hardcoded secrets like passwords, api keys, and tokens in git repos.
It automatically scans new commits and interrupts the execution if it finds content that matches the configured rules.
Example of report while trying to commit changes:
> [email protected] check-secrets
> docker run --rm -v $(pwd):/path zricethezav/gitleaks:v8.17.0 protect --source='/path' --staged --no-banner -v

Finding: ...XXXXXXXXXAIXXXXXXXXXXXXXXX/aXXXXXXX...
Secret: XXXXXXXXXXXXXX
RuleID: aws-access-token
Entropy: 1.021928
File: packages/core/src/external/cda/__tests__/examples.ts
Line: 69
Fingerprint: packages/core/src/external/cda/__tests__/examples.ts:aws-access-token:69

2:31AM INF 1 commits scanned.
2:31AM INF scan completed in 141ms
2:31AM WRN leaks found: 1
husky - pre-commit hook exited with code 1 (error)
If you're absolutely sure there's no secret on the reported file/line, add the fingerprint to the .gitleaksignore file - that finding will then be ignored and you'll be able to commit.
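For example, the fingerprint from the report above could be appended from the repository root like so:

```shell
# Append the reported fingerprint (the "Fingerprint:" line from the Gitleaks output)
# to .gitleaksignore so that this exact file/rule/line finding is skipped.
echo 'packages/core/src/external/cda/__tests__/examples.ts:aws-access-token:69' >> .gitleaksignore
```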
First, create a local environment file to define your developer keys, and local dev URLs:
$ touch packages/api/.env
$ echo "LOCAL_ACCOUNT_CXID=<YOUR-TESTING-ACCOUNT-ID>" >> packages/api/.env
$ echo "API_URL=http://localhost:8080" >> packages/api/.env
$ echo "FHIR_SERVER_URL=<FHIR-SERVER-URL>" >> packages/api/.env # optional
Additionally, define your System Root OID. This will be the base identifier to represent your system in any medical data you create - such as organizations, facilities, and patients.
Your OID must be registered and assigned by HL7. You can do this here.
By default, OIDs in Metriport are managed according to the recommended standards outlined by HL7.
$ echo "SYSTEM_ROOT_OID=<YOUR-OID>" >> packages/api/.env
These envs are specific to CommonWell and are necessary for sending requests to their platform.
$ echo "CW_TECHNICAL_CONTACT_NAME=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_TECHNICAL_CONTACT_TITLE=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_TECHNICAL_CONTACT_EMAIL=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_TECHNICAL_CONTACT_PHONE=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_ENDPOINT=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_AUTHORIZATION_SERVER_ENDPOINT=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_AUTHORIZATION_CLIENT_ID=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_AUTHORIZATION_CLIENT_SECRET=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_NAME=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_OID=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_ORG_MANAGEMENT_PRIVATE_KEY=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_ORG_MANAGEMENT_CERTIFICATE=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_PRIVATE_KEY=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_CERTIFICATE=<YOUR-SECRET>" >> packages/api/.env
The API server reports analytics to PostHog. This is optional.
If you want to set it up, add this to the .env file:
$ echo "POST_HOG_API_KEY=<YOUR-API-KEY>" >> packages/api/.env
The API server reports endpoint usage to an external service. This is optional.
If enabled, a reachable service that accepts a POST request at the configured URL with the payload below is required:
{
"cxId": "<the account ID>",
"cxUserId": "<the ID of the user who's data is being requested>"
}
If you want to set it up, add this to the .env file:
$ echo "USAGE_URL=<YOUR-URL>" > packages/api/.env
Then, to run the full back-end stack, use docker-compose to launch a Postgres container, a local instance of DynamoDB, and the Node server itself:
$ cd packages/api
$ npm run start-docker-compose
...or, from the root folder...
$ npm run start-docker-compose -w api
Now, the backend services will be available at:
- API server: 0.0.0.0:8080
- Postgres: localhost:5432
- DynamoDB: localhost:8000
Another option is to have the dependency services running with docker compose and the back-end API running as a regular NodeJS process (faster to run and restart); this has the benefit of Docker Desktop managing the services, and you likely only need to start the dependencies once.
$ cd packages/api
$ npm run start-dependencies # likely only needs to be run once
$ npm run dev
The API Server uses Sequelize as an ORM, and its migration component to update the DB with changes as the application evolves. It also uses Umzug for programmatic migration execution and typing.
When the application runs, it automatically executes all migrations located under src/sequelize/migrations (in ascending order) before the server code actually runs.
If you need to undo/revert a migration manually, you can use the CLI, which is a wrapper around Umzug's CLI (still under heavy development at the time of this writing).
It requires DB credentials in the environment variable DB_CREDS (values from docker-compose.dev.yml, update as needed):
$ export DB_CREDS='{"username":"admin","password":"admin","dbname":"db","engine":"postgres","host":"localhost","port":5432}'
Run the CLI with:
$ npm i -g ts-node # only needs to be run once
$ cd packages/api
$ ts-node src/sequelize/cli
Alternatively, you can use a shortcut for migrations on local environment:
$ npm run db-local -- <cmd>
Note: the double dash -- is required so that parameters after it go to the Sequelize CLI; without it, parameters go to npm.
Umzug's CLI is still in development at the time of this writing, so this is how one uses it:
- ctrl+c exits the CLI
- up executes all outstanding migrations
- down reverts one migration at a time

To create new migrations, add a file under ./packages/api/src/sequelize/migrations implementing:
- up: adds changes to the DB (takes it to the new version)
- down: rolls back changes from the DB (goes back to the previous version)

To do basic UI admin operations on the DynamoDB instance, you can do the following:
$ npm install -g dynamodb-admin # only needs to be run once
$ npm run ddb-admin # admin console will be available at http://localhost:8001/
To kill and clean up the back-end, hit CTRL + C a few times, and run the following from the packages/api directory:
$ docker-compose -f docker-compose.dev.yml down
To debug the backend, you can attach a debugger to the running Docker container by launching the Docker: Attach to Node configuration in VS Code. Note that this will support hot reloads 🔥🔥!
The ./packages/utils folder contains utilities that help with the development of this and other open-source Metriport projects:
Check the scripts on the folder's package.json to see how to run these.
Unit tests can be executed with:
$ npm run test
To run integration tests, make sure to check each package/folder README for requirements, but in general they can be executed with:
$ npm run test:e2e
Most endpoints require an API Gateway API Key. You can create one manually on the AWS console, or programmatically through the AWS CLI or SDK.
To do it manually:
- the value field must follow this pattern: base 64 of "<KEY>:<UUID>", where:
  - KEY is a random key (e.g., generated with nanoid); and
  - UUID is the customer ID (more about this on Initialization)

Now you can make requests to endpoints that require an API Key by setting the x-api-key header.
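A quick way to build that value on the command line (the KEY and customer ID below are placeholders for illustration):

```shell
# Build the API key value: base 64 of "<KEY>:<UUID>".
KEY="my-random-key"                          # placeholder; e.g., generated with nanoid
CX_ID="11111111-2222-3333-4444-555555555555" # placeholder customer ID (UUID)
API_KEY=$(printf '%s:%s' "$KEY" "$CX_ID" | base64)
echo "$API_KEY"
```

The resulting value then goes in the x-api-key header of your requests.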
Install AWS CLI and authenticate with it.
You'll need to create and configure a deployment config file: /infra/config/production.ts. You can see example.ts in the same directory for a sample of what the end result should look like. Optionally, you can set up config files for staging and sandbox deployments, based on your environment needs. Then, proceed with the deployment steps below.
Configure the Connect Widget environment variables to the subdomain and domain you'll be hosting the API at in the config file: packages/connect-widget/.env.production.
Deploy the secrets stack (with <config.secretsStackName> replaced with what you've set in your config file):
$ ./packages/scripts/deploy-infra.sh -e "production" -s "<config.secretsStackName>"
After the previous steps are done, define all of the required keys in the AWS console by navigating to the Secrets Manager.
Then, to provision the infrastructure needed by the API/back-end execute the following command:
$ ./packages/scripts/deploy-infra.sh -e "production" -s "<config.stackName>"
This will create the infrastructure to run the API, including the ECR repository where the API will be deployed. Take note of the repository URI to populate the environment variable ECR_REPO_URI later.
Update the packages/infra/config/production.ts configuration file, populating the properties under iheGateway with the information from the respective resources created on the previous step (API Stack).
Execute:
$ ./packages/scripts/deploy-infra.sh -e "production" -s "IHEStack"
This will create the infrastructure to run the IHE Gateway.
$ AWS_REGION=xxx ECR_REPO_URI=xxx ECS_CLUSTER=xxx ECS_SERVICE=xxx ./packages/scripts/deploy-api.sh
where:
After deployment, the API will be available at the configured subdomain + domain.
Note: if you need help with the deploy-infra.sh script at any time, you can run:
$ ./packages/scripts/deploy-infra.sh -h
Distributed under the AGPLv3 License. See LICENSE for more information.
Copyright © Metriport 2022-present