In the last year, Battlefy has seen a paradigm shift towards package-focused development. When building a package that is going to be used across an organisation, that package needs to be properly tested and documented. Ideally, the tests and documentation live alongside the package code itself, so that everything stays in sync as the code changes. The tests need to run against in-development code (and block a release if they fail), and ideally a good chunk of the documentation is generated automatically from the package code itself. It's also nice to be able to snapshot and preserve the documentation at the point of release.
However, this raises the question: how do we manage all of this in a single repo? When we publish the package, we don't want to also include all the tests and documentation. This blog post outlines an approach for structuring monorepos to support this workflow, while still letting us structure the packaged code in whatever way we decide is best.
As an example, consider a design system component package: a set of React components that implement our design system. We want to expose the components in a convenient, tree-shakable way.
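For example, a consumer should be able to pull in only the components it needs, along these lines (the package name and component names here are just placeholders):

```ts
// Named imports from the package root can be tree-shaken by modern bundlers...
import { Button, Card } from '@battlefy/design-system';

// ...and individual components can also be imported from their own modules.
import { Modal } from '@battlefy/design-system/Modal';
```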
We also want to provide interactive visual documentation. Battlefy has had a lot of success with Storybook. This interactive documentation should be available to anyone on the web, and we should be able to share in-development versions of the package to get feedback from stakeholders.
Finally, we want to automate our processes. Pushing a branch should run the tests, then build and deploy that branch's documentation for review. Merging a new package version into our production branch should run the tests, create a git tag of the release commit, copy the documentation somewhere permanent, and publish the package to npm. Automating these steps makes them more convenient and less prone to human error.
Our project will be structured like so:
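Roughly, the layout looks like this (apart from `package/index.ts` and the workflow files discussed below, the folder names are illustrative):

```
.
├── package/
│   ├── index.ts                # re-exports every component
│   ├── Button/
│   │   ├── Button.tsx
│   │   ├── Button.stories.tsx
│   │   └── Button.test.tsx
│   └── ...
├── docs-browser/               # documentation browser deployed to AWS Lambda
├── scripts/                    # publish and deploy helper scripts
└── .github/
    └── workflows/
        ├── release.yml
        └── docs.yml
```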
Our repo's source code is split in two.
The `package` folder contains our actual package code, along with the stories and tests. The project is organised on the "co-locate by concern" principle, which means stories and tests live in the same folder as the components themselves. Many of our components will also have sub-folders for things like component-specific helpers, sub-components, and component variants. Each component is re-exported from the main `package/index.ts` file.
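To make that concrete, `package/index.ts` might look something like this (`Button` and `Card` are placeholder component names):

```ts
// package/index.ts
// One named export per component keeps the entry point tree-shakable.
export { Button } from './Button';
export { Card } from './Card';
```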
The second half of the repo is a small documentation browser application. It will be deployed to an AWS Lambda function and lets users browse the list of released versions as well as the in-development branches. It also serves the statically generated Storybook files. Generally speaking, we're okay with our in-development iterations being public: there's no sensitive information in them, and securing them would add extra hoops both for us as the package maintainers and for the users who want to access them. However, we can easily modify this Lambda in the future to integrate with our auth provider and restrict access to certain routes.
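As a very rough sketch (not the real implementation), the handler could look something like the following. It assumes an API Gateway v2 / function URL style event and a `DOCS_BUCKET` environment variable pointing at a bucket holding one Storybook build per branch or tag prefix; all of those details are assumptions.

```ts
// docs-browser/handler.ts: illustrative sketch only.
import { S3Client, GetObjectCommand, ListObjectsV2Command } from '@aws-sdk/client-s3';
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

const s3 = new S3Client({});
const BUCKET = process.env.DOCS_BUCKET!; // hypothetical configuration

export const handler = async (
  event: APIGatewayProxyEventV2,
): Promise<APIGatewayProxyResultV2> => {
  const path = event.rawPath.replace(/^\/+/, '');

  // Root path: list the top-level prefixes (one per branch or tag) as links.
  if (path === '') {
    const listing = await s3.send(
      new ListObjectsV2Command({ Bucket: BUCKET, Delimiter: '/' }),
    );
    const links = (listing.CommonPrefixes ?? [])
      .map((p) => p.Prefix ?? '')
      .map((prefix) => `<li><a href="/${prefix}index.html">${prefix}</a></li>`)
      .join('');
    return {
      statusCode: 200,
      headers: { 'content-type': 'text/html' },
      body: `<ul>${links}</ul>`,
    };
  }

  // Any other path: proxy the requested Storybook file straight from S3.
  const object = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: path }));
  return {
    statusCode: 200,
    headers: { 'content-type': object.ContentType ?? 'application/octet-stream' },
    body: await object.Body!.transformToString('base64'),
    isBase64Encoded: true,
  };
};
```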
We'll need a fair few scripts in `package.json` to support our intended behaviour. To recap, here's what we want to achieve: run the tests on every push, build and deploy Storybook documentation for every branch, and, when a new package version lands on master, tag the release, archive its documentation, and publish the package to npm.
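The exact set depends on your tooling, but a sketch of the scripts block might look something like this (the script names, the Jest test runner, and the Storybook 6 style commands are illustrative):

```json
{
  "scripts": {
    "test": "jest",
    "build": "tsc -p tsconfig.build.json",
    "storybook": "start-storybook -p 6006",
    "build-storybook": "build-storybook -o storybook-static",
    "build-docs-browser": "ts-node scripts/build-docs-browser.ts",
    "release": "./scripts/publish.sh"
  }
}
```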
To build the documentation browser, we can use esbuild to generate a single JavaScript file that can be deployed to our AWS Lambda function. The docs browser code uses the AWS SDK to serve files from S3, and this package is always available to all Lambda functions, which means we can mark that dependency as external to reduce our bundle size.
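A minimal sketch of that bundling step using esbuild's JavaScript API (the file name, entry point, and Node target are assumptions):

```ts
// scripts/build-docs-browser.ts
// Bundle the docs browser into a single file for Lambda.
import { build } from 'esbuild';

build({
  entryPoints: ['docs-browser/handler.ts'],
  bundle: true,
  platform: 'node',
  target: 'node18',                    // match the Lambda runtime
  outfile: 'dist/handler.js',
  // The Lambda runtime already ships the AWS SDK, so keep it out of the bundle.
  external: ['@aws-sdk/*', 'aws-sdk'],
}).catch(() => process.exit(1));
```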
Once the bundle is generated, we can update our Lambda function's code using the `update-function-code` CLI command. We could also use the Node SDK, or even the AWS Cloud Development Kit (CDK).
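For example, something like this (the function name and file paths are placeholders):

```sh
# Zip the bundle and push it to the existing Lambda function.
zip -j dist/docs-browser.zip dist/handler.js
aws lambda update-function-code \
  --function-name docs-browser \
  --zip-file fileb://dist/docs-browser.zip
```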
This process will be manual for now, since we want explicit control over when the docs browser is updated. Alternative approaches could include using workspaces to version the docs browser separately, provisioning staging environments, or driving the entire application deployment with the AWS CDK.
There are two workflows in this project, one responsible for releasing documentation, and another for releasing new versions of the package.
The `.github/workflows/release.yml` workflow handles releasing a new version of the package. This workflow runs against all merges to master. It reads the current version number from `package.json`, then uses `npm show [package-name] version` to get the latest version of the package published to the npm registry. If these are different, we know we need to publish a new version of the package. This assumes that the package version number only ever increases, or at least never goes back to an old version, which is a fairly safe assumption for now.
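The check itself can be a small shell step in the workflow; roughly (the step layout and output handling are illustrative):

```yaml
# .github/workflows/release.yml (excerpt, illustrative)
- name: Check whether package.json's version is already published
  id: version_check
  run: |
    LOCAL_VERSION=$(node -p "require('./package.json').version")
    PACKAGE_NAME=$(node -p "require('./package.json').name")
    PUBLISHED_VERSION=$(npm show "$PACKAGE_NAME" version)
    echo "local=$LOCAL_VERSION published=$PUBLISHED_VERSION"
    if [ "$LOCAL_VERSION" != "$PUBLISHED_VERSION" ]; then
      echo "should_publish=true" >> "$GITHUB_OUTPUT"
    fi
```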
The package is published using a dedicated shell script. It runs an npm script to build the package with `tsc` (keeping the output as separate files to facilitate code splitting and tree shaking), copies only the files we want to publish into a special folder, `cd`s into that folder, and runs `npm publish`. This gives us complete control over which files get published. An `.npmignore` file is also written to ensure that story files and test files are never published.
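A rough sketch of the idea behind that script (the `dist` and `publish` folder names are placeholders, not the real ones):

```sh
#!/usr/bin/env bash
# scripts/publish.sh -- sketch of the publish flow, not the exact script.
set -euo pipefail

# Build with tsc so each module stays a separate file (better tree shaking).
npm run build

# Copy only what we actually want to ship into a clean folder.
rm -rf publish && mkdir publish
cp -R dist/. publish/
cp package.json README.md .npmignore publish/

# Publish from inside that folder so nothing else can sneak in.
cd publish
npm publish
```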
Once the new package version has been published successfully, we also create a git tag with the same name as the current version and push it to GitHub. This gives us a convenient, permanent record of the released version's source code, and it also signals the docs release workflow to run.
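The tagging itself is just a couple of commands at the end of the workflow, along the lines of:

```sh
VERSION=$(node -p "require('./package.json').version")
git tag "$VERSION"
git push origin "$VERSION"
```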
The `.github/workflows/docs.yml` workflow is responsible for building the Storybook documentation. When a branch or a tag is pushed, we check out that branch or tag, build Storybook, and then sync the generated folder to an S3 bucket using the AWS CLI, with the branch or tag name as the base folder name. These S3 files can then be served via the Lambda function detailed above.
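A sketch of what that workflow might look like (the bucket name, secrets, and Node setup are illustrative):

```yaml
# .github/workflows/docs.yml (illustrative sketch)
name: docs
on:
  push:
    branches: ['**']
    tags: ['*']

jobs:
  deploy-storybook:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build-storybook
      # GITHUB_REF_NAME is the branch or tag that triggered the workflow.
      - run: aws s3 sync storybook-static "s3://$DOCS_BUCKET/$GITHUB_REF_NAME/" --delete
        env:
          DOCS_BUCKET: ${{ secrets.DOCS_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
```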
In this blog post, we've seen one possible, fairly simplistic way of structuring a monorepo that supports several kinds of deployment. We've also briefly discussed ways of making the system more sophisticated using the AWS CDK. If you'd like to see more articles covering these topics in greater detail, with more concrete code examples, please reach out on Twitter.
If you like monorepos and package-focused development, or if you have some amazing ways of improving this setup, Battlefy is hiring!