forked from LiveCarta/ContentAutomation
# ContentAutomation Boilerplate

Project scaffold for posting media from a storage backend to multiple social networks.
## Structure

```
src/content_automation/
    adapters/
        social/
            base.py
            instagram.py
            youtube.py
        storage/
            base.py
            local.py
            s3.py
    controller.py
    factories.py
    interfaces.py
    main.py
    settings.py
```
## Design

- `SocialNetworkAdapter` interface defines a typed `post_media(media_url, caption)` contract.
- Social adapters inherit from `SocialNetworkBaseAdapter` and use typed `uplink.Consumer` clients.
- `StorageAdapterBase` defines `exists()` and `get_public_url()`.
- `LocalFilesystemStorageAdapter` and `S3StorageAdapter` implement storage behavior.
- `PublishController` is adapter-agnostic and only depends on abstractions.
- `AppSettings` uses `pydantic-settings` with nested settings classes per adapter/backend.
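The contracts above might look roughly like this — a minimal sketch, not the actual contents of `interfaces.py`; real signatures, return types, and the controller's error handling may differ:

```python
# Hypothetical sketch of the adapter contracts; names mirror the design
# notes above, but the real interfaces.py may differ.
from abc import ABC, abstractmethod


class SocialNetworkAdapter(ABC):
    """Typed contract every social adapter satisfies."""

    @abstractmethod
    def post_media(self, media_url: str, caption: str) -> str:
        """Publish media and return the created post's identifier."""


class StorageAdapterBase(ABC):
    """Storage backends expose existence checks and public URLs."""

    @abstractmethod
    def exists(self, path: str) -> bool: ...

    @abstractmethod
    def get_public_url(self, path: str) -> str: ...


class PublishController:
    """Depends only on the abstractions, never on concrete adapters."""

    def __init__(
        self,
        storage: StorageAdapterBase,
        networks: list[SocialNetworkAdapter],
    ) -> None:
        self.storage = storage
        self.networks = networks

    def publish(self, path: str, caption: str) -> list[str]:
        # Resolve the media to a public URL once, then fan out to all targets.
        if not self.storage.exists(path):
            raise FileNotFoundError(path)
        url = self.storage.get_public_url(path)
        return [network.post_media(url, caption) for network in self.networks]
```

Because the controller only sees the two abstract base classes, new networks or storage backends plug in without touching it.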
## Configuration

Environment variables use the prefix `CONTENT_AUTOMATION_` and the nested delimiter `__`.

Examples (PowerShell):

```powershell
# Targets
$env:CONTENT_AUTOMATION_TARGET_SOCIAL_NETWORKS='["instagram","youtube"]'

# Instagram
$env:CONTENT_AUTOMATION_INSTAGRAM__ACCESS_TOKEN='your-instagram-token'
$env:CONTENT_AUTOMATION_INSTAGRAM__USER_ID='your-instagram-user-id'
$env:CONTENT_AUTOMATION_INSTAGRAM__API_VERSION='v25.0'

# YouTube
$env:CONTENT_AUTOMATION_YOUTUBE__ACCESS_TOKEN='your-youtube-token'
$env:CONTENT_AUTOMATION_YOUTUBE__REFRESH_TOKEN='your-youtube-refresh-token'
$env:CONTENT_AUTOMATION_YOUTUBE__CLIENT_ID='your-google-oauth-client-id'
$env:CONTENT_AUTOMATION_YOUTUBE__CLIENT_SECRET='your-google-oauth-client-secret'
$env:CONTENT_AUTOMATION_YOUTUBE__TOKEN_URI='https://oauth2.googleapis.com/token'
$env:CONTENT_AUTOMATION_YOUTUBE__SCOPES='["https://www.googleapis.com/auth/youtube.upload"]'
$env:CONTENT_AUTOMATION_YOUTUBE__EXPIRY='2026-03-13T00:00:00Z'
$env:CONTENT_AUTOMATION_YOUTUBE__CATEGORY_ID='22'
$env:CONTENT_AUTOMATION_YOUTUBE__PRIVACY_STATUS='public'

# Storage backend (local or s3)
$env:CONTENT_AUTOMATION_STORAGE__BACKEND='local'

# Local storage settings
$env:CONTENT_AUTOMATION_STORAGE__LOCAL__ROOT_DIRECTORY='D:\media'

# S3 storage settings
$env:CONTENT_AUTOMATION_STORAGE__BACKEND='s3'
$env:CONTENT_AUTOMATION_STORAGE__S3__BUCKET_NAME='your-bucket'
$env:CONTENT_AUTOMATION_STORAGE__S3__KEY_PREFIX='uploads'
$env:CONTENT_AUTOMATION_STORAGE__S3__REGION_NAME='us-west-1'
```
## Run

```shell
python -m content_automation.main path/to/video.mp4 --caption "My new post"
```
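Before that entry point can publish anything, the configured backend name has to become a concrete adapter. A hypothetical sketch of the kind of lookup `factories.py` might perform (the constructors here are placeholders, not the real adapter classes):

```python
# Hypothetical factory sketch; the real factories.py may be organized
# differently and construct adapters from AppSettings instead of kwargs.
from typing import Any, Callable


class LocalFilesystemStorageAdapter:
    def __init__(self, root_directory: str) -> None:
        self.root_directory = root_directory


class S3StorageAdapter:
    def __init__(self, bucket_name: str) -> None:
        self.bucket_name = bucket_name


# Map the STORAGE__BACKEND value onto a constructor.
_STORAGE_FACTORIES: dict[str, Callable[..., Any]] = {
    "local": LocalFilesystemStorageAdapter,
    "s3": S3StorageAdapter,
}


def create_storage_adapter(backend: str, **kwargs: Any) -> Any:
    try:
        factory = _STORAGE_FACTORIES[backend]
    except KeyError:
        raise ValueError(f"Unknown storage backend: {backend!r}")
    return factory(**kwargs)
```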
## Pre-commit

Install hooks:

```shell
uv run pre-commit install --hook-type pre-commit --hook-type pre-push
```

Run hooks manually:

```shell
uv run pre-commit run --all-files
```
## Docker

Build the image:

```shell
docker build -t content-automation .
```

Run a publish job:

```shell
docker run --rm --env-file .env -v "$PWD:/workspace" content-automation upload path/to/video.mp4 --caption "My new post"
```

Run any other command in the container:

```shell
docker run --rm -it --env-file .env content-automation bash
```