# Migrating from MinIO to Garage


## Background

For a few years now, my GitLab instances and some other services have required S3 for distributed file storage.

At the time, MinIO was the obvious choice as it had solid S3 support and a good Web UI. The data on it has since been migrated across three instances: from a Docker container, to bare metal on Debian, to its final destination as a NixOS service on my NAS.

However, they recently yanked access management via the Web UI from the OSS edition and moved it to their commercial offering. This part of the Web UI was the one I used the most, and for me at least that was motivation enough to look for alternatives.

I do understand their motivation to push people towards the enterprise offering, given how popular the free edition of MinIO has become. However, the prospect of MinIO pulling further features in the future lowers its long-term reliability a lot for me.

## Assessment

The first step of the migration was assessing:

  1. What consumers are present?
  2. What do the consumers require?

The first question was relatively simple to answer thanks to existing documentation: GitLab, Outline, Plane, and Attic all store data in S3.

GitLab is relatively easy to manage, as all buckets can stay private and only services need to access them. However, both Outline and Plane expose assets to users, either through signed URLs or via a proxied route.

## Assembly

### Tooling

The MinIO client, mc, was installed for the data migration. For S3 API interactions, specifically the CORS policy migration, the awscli was installed.

If you have nix installed:

- use `nix shell nixpkgs#awscli nixpkgs#minio-client`
- or `nix-shell -p awscli minio-client`

### Reducing scope

Both Plane and Outline were reconfigured to use alternative storage methods. This simplified all other steps, as Plane at least prefers bucket policies, which Garage does not support.

### Preparing the destination

The Garage service was deployed as outlined in the Quick Start Guide. This was relatively quick on my NixOS server for a single-node installation.
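
For a single node, the remaining cluster setup boils down to a handful of commands. A minimal sketch of the layout steps from the guide; the zone name and capacity below are assumptions for a one-node cluster:

```sh
# Show the node ID of the local Garage instance
garage status
# Assign the node to a zone with a storage capacity (placeholder values)
garage layout assign -z dc1 -c 1G <node id>
# Review and apply the new cluster layout
garage layout show
garage layout apply --version 1
```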

### Preparing the source

For migrating the data, we need to be able to read it. In this case, we want to acquire an access key that can read all MinIO buckets.

A new identity was created for this and a key with the readonly policy template worked fine here. This identity can be removed after the migration.
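
A sketch of what this can look like with mc, assuming an alias with admin credentials named minio and a hypothetical identity name; recent mc versions use policy attach:

```sh
# Create a temporary identity for the migration (placeholder name/secret)
mc admin user add minio migration-reader <secret key>
# Attach the built-in readonly policy template to it
mc admin policy attach minio readonly --user migration-reader
```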

## Action

The data migration steps below will likely mean some downtime for your services. This is not covered directly here, but another step was to coordinate service maintenance with this data migration.

For GitLab this meant:

- a maintenance window for a GitLab upgrade was also used to migrate pages data
- all runners were paused and drained of jobs before the cache was migrated

Draining the runners was likely not required, as they can also work with disjunct caches. It was, however, the easier way to ensure good cache availability for all jobs.

For Attic, the service had a small downtime while the data was being migrated.

### Configuring the destination

All relevant buckets were recreated based on the quick start guide.

This effectively boiled down to:

```sh
garage bucket create <bucket>
garage key create <identity>
# NOTE: note down the key ID / secret key for the service
garage bucket allow --read --write <bucket> --key <identity>
# pass --read and/or --write as needed
```

As bucket/ACL policies are not supported, these were all the steps required for the service identities themselves.
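
Each grant can be double-checked afterwards; garage bucket info prints a bucket's details, including the keys allowed to access it:

```sh
garage bucket info <bucket>
```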

Another key with read and write privileges for all buckets was also created for the migration. It will be used in the next step and can be removed with `garage key delete <key> --yes` afterwards.
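
A sketch of that setup, with the key name and bucket names as placeholders; substitute your own:

```sh
# Create the temporary migration identity
garage key create migration
# Grant it read/write on every bucket that will be mirrored
for bucket in gitlab-artifacts gitlab-lfs attic; do
  garage bucket allow --read --write "$bucket" --key migration
done
# And once the migration is done:
garage key delete migration --yes
```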

### Migrating data

For interacting with the S3 storages, both MinIO and Garage, the MinIO CLI was chosen, mainly because its mirror subcommand makes it trivial to fully replicate the contents of a bucket.

The key ID and secret key of the MinIO migration identity are required for the next step; with them, an alias for the mc CLI can be created:

```sh
mc alias set source https://<minio> <key id> <secret key>
```

For Garage, another alias with the privileged migration key was created:

```sh
mc alias set garage https://<garage> <migration key id> <migration secret key>
```

You can test both aliases with `mc ls <alias>`; each should list the buckets on its respective endpoint.

The next step is migrating the objects.

You can now mirror each bucket with `mc mirror source/<bucket> garage/<bucket>`.
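
With more than a handful of buckets, a small loop saves some typing; the bucket names below are placeholders, and --preserve keeps object attributes where possible:

```sh
# Mirror every bucket from MinIO to Garage
for bucket in gitlab-artifacts gitlab-lfs attic; do
  mc mirror --preserve "source/$bucket" "garage/$bucket"
done
```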

### Porting CORS policies

Some buckets require CORS policies. Especially when a service uses signed URLs or similar, these need to be transferred for functional operation.

For this step the awscli was used, with the same credentials as the mc aliases.
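
The aws CLI picks its credentials up from the environment, so the simplest approach is exporting them before the calls below; a profile in ~/.aws/credentials works just as well:

```sh
# Same credentials as the corresponding mc alias
export AWS_ACCESS_KEY_ID=<key id>
export AWS_SECRET_ACCESS_KEY=<secret key>
```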

First, the old CORS configuration was pulled from the source bucket and saved to /tmp/cors:

```sh
aws --endpoint-url https://<minio> s3api get-bucket-cors --bucket <bucket> > /tmp/cors
```

And then pushed to the new bucket:

```sh
aws --endpoint-url https://<garage> s3api put-bucket-cors --bucket <bucket> --cors-configuration file:///tmp/cors
```

This has to be done on a per-bucket basis.
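
Wrapped in a loop over the affected buckets (names are placeholders again), the whole transfer is a few lines:

```sh
# Copy the CORS configuration for each bucket that needs one
for bucket in gitlab-uploads attic; do
  aws --endpoint-url https://<minio> s3api get-bucket-cors --bucket "$bucket" > "/tmp/cors-$bucket"
  aws --endpoint-url https://<garage> s3api put-bucket-cors --bucket "$bucket" --cors-configuration "file:///tmp/cors-$bucket"
done
```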


Thanks for reading my blog post! Feel free to check out my other posts or contact me via the social links in the footer.

