# MySQL Backup Image

The `mysql-backup` Docker image provides a containerized solution for MySQL database backups with AWS S3 upload support.

## Overview

This image is based on MySQL 8.0 and includes additional tools for creating compressed backups and uploading them to AWS S3. It is designed to be used in Kubernetes Jobs for scheduled database backups.
## Image Details

- Base image: `mysql:8.0`
- Registry: `registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup`
- Purpose: MySQL database backups with S3 storage
## Included Tools

The image includes:

- MySQL 8.0: base MySQL client tools, including `mysqldump`
- Python 3: for running scripts and the AWS CLI
- AWS CLI: for uploading backups to S3
- gzip: for compressing backup files
- less: for viewing files
- groff: required dependency of the AWS CLI
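
To confirm the bundled tools are present in a pulled image, the entrypoint can be overridden manually; a sketch (version output will vary by build):

```shell
# Override the entrypoint with a shell and print each tool's version.
docker run --rm --entrypoint /bin/sh \
  registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup:latest \
  -c 'mysqldump --version && python3 --version && aws --version'
```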
## Dockerfile

```dockerfile
# Base image with mysqldump already available
FROM mysql:8.0

# Install Python, pip, gzip and the AWS CLI
RUN microdnf install -y \
        python3-pip \
        gzip \
        less \
        groff \
    && pip3 install --no-cache-dir awscli \
    && microdnf clean all

# Optional: print versions for debugging (visible in the build log)
RUN mysql --version && aws --version

# No ENTRYPOINT/CMD is defined here, so a Kubernetes Job can supply its own `command:`.
```
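
Building and pushing the image follows the usual Docker workflow; a sketch that reuses the registry path from this README (adjust the tag to your project layout):

```shell
# Build from the Dockerfile above and push to the GitLab registry.
docker build -t registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup:latest .
docker push registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup:latest
```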
## Usage

### Pulling the Image

```shell
docker pull registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup:latest
```

### Running Manually

The command passed to `docker run` is not executed by a shell, so wrap the pipeline in `sh -c`. Use single quotes so the variables expand inside the container, not on the host:

```shell
docker run --rm \
  -e MYSQL_HOST=your-db-host \
  -e MYSQL_USER=backup-user \
  -e MYSQL_PASSWORD=your-password \
  -e MYSQL_DATABASE=your-database \
  -e AWS_ACCESS_KEY_ID=your-key \
  -e AWS_SECRET_ACCESS_KEY=your-secret \
  -e S3_BUCKET=your-bucket \
  registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup:latest \
  sh -c 'mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" | gzip | aws s3 cp - "s3://$S3_BUCKET/backup-$(date +%Y%m%d-%H%M%S).sql.gz"'
```
## Kubernetes Job Example

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-backup
spec:
  template:
    spec:
      containers:
        - name: mysql-backup
          image: registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup:latest
          env:
            - name: MYSQL_HOST
              value: "mysql-service.default.svc.cluster.local"
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: password
            - name: MYSQL_DATABASE
              value: "my-database"
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: secret-access-key
            - name: S3_BUCKET
              value: "my-backup-bucket"
          command:
            - /bin/sh
            - -c
            - |
              mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" | \
                gzip | \
                aws s3 cp - "s3://$S3_BUCKET/backup-$(date +%Y%m%d-%H%M%S).sql.gz"
      restartPolicy: Never
```
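
Once applied, the Job's outcome can be checked with kubectl; a sketch assuming the manifest above is saved as `mysql-backup-job.yaml` (the filename is an assumption):

```shell
# Create the Job, wait for completion, then read the container's output.
kubectl apply -f mysql-backup-job.yaml
kubectl wait --for=condition=complete job/mysql-backup --timeout=300s
kubectl logs job/mysql-backup
```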
## CronJob Example (Scheduled Backups)

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-backup-daily
spec:
  schedule: "0 2 * * *"  # Run daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mysql-backup
              image: registry.gitlab.com/welance/platform/pipelines/container/images/mysql-backup:latest
              env:
                - name: MYSQL_HOST
                  value: "mysql-service.default.svc.cluster.local"
                - name: MYSQL_USER
                  valueFrom:
                    secretKeyRef:
                      name: mysql-credentials
                      key: username
                - name: MYSQL_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql-credentials
                      key: password
                - name: MYSQL_DATABASE
                  value: "my-database"
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: aws-credentials
                      key: access-key-id
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: aws-credentials
                      key: secret-access-key
                - name: S3_BUCKET
                  value: "my-backup-bucket"
              command:
                - /bin/sh
                - -c
                - |
                  mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" | \
                    gzip | \
                    aws s3 cp - "s3://$S3_BUCKET/backup-$(date +%Y%m%d-%H%M%S).sql.gz"
          restartPolicy: OnFailure
```
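
A one-off run of the scheduled backup can be triggered for testing; the Job name `mysql-backup-manual` below is an assumption:

```shell
# Create an ad-hoc Job from the CronJob's template and follow its output.
kubectl create job --from=cronjob/mysql-backup-daily mysql-backup-manual
kubectl logs -f job/mysql-backup-manual
```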
## Environment Variables

The image doesn't define default environment variables, but typical usage requires:

| Variable | Description | Required |
|---|---|---|
| `MYSQL_HOST` | MySQL server hostname | Yes |
| `MYSQL_USER` | MySQL username | Yes |
| `MYSQL_PASSWORD` | MySQL password | Yes |
| `MYSQL_DATABASE` | Database to back up | Yes |
| `AWS_ACCESS_KEY_ID` | AWS access key for S3 | Yes (if uploading to S3) |
| `AWS_SECRET_ACCESS_KEY` | AWS secret key for S3 | Yes (if uploading to S3) |
| `S3_BUCKET` | S3 bucket name | Yes (if uploading to S3) |
| `AWS_DEFAULT_REGION` | AWS region | No (set it if the AWS CLI has no region configured elsewhere) |
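
Since the image sets no defaults, a backup script may want to fail fast when something from the table above is missing. A minimal sketch (the `require_env` helper is hypothetical, not part of the image):

```shell
# Hypothetical pre-flight check: return non-zero if any required
# variable from the table above is unset or empty.
require_env() {
  for v in MYSQL_HOST MYSQL_USER MYSQL_PASSWORD MYSQL_DATABASE; do
    eval "val=\${$v:-}"            # indirect expansion, POSIX sh compatible
    if [ -z "$val" ]; then
      echo "ERROR: $v is not set" >&2
      return 1
    fi
  done
}
```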
## Command Override

The image intentionally defines no ENTRYPOINT or CMD, so Kubernetes Jobs can specify custom commands. This provides flexibility for different backup strategies and use cases.

### Typical Backup Command

A typical backup command would be:

```shell
mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" | \
  gzip | \
  aws s3 cp - "s3://$S3_BUCKET/backup-$(date +%Y%m%d-%H%M%S).sql.gz"
```

This command:

- creates a MySQL dump using `mysqldump`
- compresses it with `gzip`
- uploads it directly to S3 with the AWS CLI (the `-` tells `aws s3 cp` to read from stdin)
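
The shape of this pipeline can be tried without a database or an S3 bucket by simulating the dump stage with `echo`:

```shell
# Simulate the pipeline: a fake dump is compressed to a timestamped
# key of the same form the real command uses, then decompressed again.
key="backup-$(date +%Y%m%d-%H%M%S).sql.gz"
echo "-- simulated dump --" | gzip > "/tmp/$key"
gunzip -c "/tmp/$key"   # the compressed stream round-trips intact
```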
## Advanced Usage

### Backup with Options

```shell
mysqldump \
  -h "$MYSQL_HOST" \
  -u "$MYSQL_USER" \
  -p"$MYSQL_PASSWORD" \
  --single-transaction \
  --routines \
  --triggers \
  "$MYSQL_DATABASE" | \
  gzip | \
  aws s3 cp - "s3://$S3_BUCKET/backup-$(date +%Y%m%d-%H%M%S).sql.gz"
```
### Multiple Databases

```shell
for db in database1 database2 database3; do
  mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$db" | \
    gzip | \
    aws s3 cp - "s3://$S3_BUCKET/${db}-backup-$(date +%Y%m%d-%H%M%S).sql.gz"
done
```
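
Instead of hard-coding the list, the databases can be enumerated from the server; a sketch assuming the `mysql` client can reach the host (system schemas are filtered out):

```shell
# Hypothetical helper: list user databases, skipping MySQL's system schemas.
# -N suppresses the column header so the output is one name per line.
list_user_databases() {
  mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -N -e 'SHOW DATABASES' \
    | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'
}
```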
### Local Backup (No S3)

```shell
mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" | \
  gzip > "/backup/backup-$(date +%Y%m%d-%H%M%S).sql.gz"
```
## Prerequisites

- MySQL Server: an accessible MySQL server with appropriate credentials
- AWS Credentials: if uploading to S3, valid AWS credentials with S3 write permissions
- S3 Bucket: if uploading to S3, an existing S3 bucket with appropriate permissions

## Security Considerations

- Credentials: store MySQL and AWS credentials as Kubernetes Secrets, not in plain text
- Network Access: ensure the container can reach the MySQL server and AWS S3
- IAM Permissions: use IAM roles with the minimal required permissions (S3 write only)
- Encryption: consider encrypting backups at rest in S3
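
For encryption at rest, the upload step can request server-side encryption; a sketch using the AWS CLI's `--sse` flag on the same pipeline:

```shell
# Ask S3 to encrypt the object at rest with SSE-S3 (AES256);
# use "--sse aws:kms" instead to encrypt with a KMS key.
mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" | \
  gzip | \
  aws s3 cp --sse AES256 - "s3://$S3_BUCKET/backup-$(date +%Y%m%d-%H%M%S).sql.gz"
```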
## Notes

- The image uses `microdnf` (Red Hat's minimal package manager) to keep the image small
- Python packages are installed without cache to reduce image size
- The image is designed to be flexible: with no default command, it can serve various backup strategies
- Version information is printed at build time, and the tools can be invoked manually for debugging
- The image is optimized for Kubernetes Job usage patterns
- Backups are compressed with gzip to reduce storage and transfer costs