
BFS Marketing Automation - IT Administration Guide

For IT Staff Managing the Raspberry Pi System

Last Updated: November 25, 2025
System Location: Raspberry Pi 5 at 10.1.10.110
Dashboard URL: http://10.1.10.110:5000


Table of Contents

  1. System Access
  2. System Overview
  3. Service Management
  4. Subscriptions & Services
  5. Troubleshooting
  6. Maintenance Tasks
  7. Emergency Procedures
  8. Advanced Administration

System Access

SSH Access

IP Address: 10.1.10.110
Hostname: bfs-pi.local (also accessible via this name)
Default SSH Port: 22

The SSH command is the same on Windows (PowerShell/Command Prompt), Mac, and Linux:

ssh bfsadmin@10.1.10.110
# or
ssh bfsadmin@bfs-pi.local

User Credentials

Username: bfsadmin
Password: Stored in the BFS IT password management system

Important: This account has sudo privileges for system administration tasks.

Launching Claude Code

Once connected via SSH, you can launch Claude Code to manage the system:

# Navigate to project directory
cd ~/bfs-projects/Marketing

# Launch Claude Code
claude

Note: Claude Code is an AI assistant that can help with system management, troubleshooting, and code updates. It has full access to the project files and can execute commands.


System Overview

Architecture

The BFS Marketing Automation system consists of three main components:

  1. Dashboard (Flask web application)
     • Runs in a Docker container on port 5000
     • Provides the web interface for content approval
     • Accessible at http://10.1.10.110:5000

  2. Orchestrator (Automated task scheduler)
     • Runs as cron jobs on the host system
     • Manages content generation, image generation, scheduling, notifications, and analytics
     • 11 tasks running on different schedules (updated Nov 24, 2025)

  3. Agents (Content generation scripts)
     • Claude AI-powered content generator
     • Runs weekly via the orchestrator
     • Creates ~65 social media posts per week
     • Supports both image posts (80%) and video posts (20%)

Directory Structure

/home/bfsadmin/bfs-projects/Marketing/
├── agents/                    # AI content generators
│   └── content_generator.py   # Main content generation script
├── web/                       # Dashboard application
│   ├── dashboard.py           # Flask backend
│   ├── templates/             # HTML templates
│   └── static/                # CSS, JavaScript, images
├── shared/                    # Shared libraries
│   ├── database.py            # Database management
│   ├── teams_client.py        # Teams notifications
│   ├── sharepoint_client.py   # SharePoint integration
│   └── canva_client.py        # Canva API client
├── orchestrator/              # Automation tasks
│   ├── config.py              # Configuration settings
│   ├── notifications.py       # Notification manager
│   ├── setup_cron.sh          # Cron installation script
│   └── tasks/                 # Individual tasks
│       ├── generate_content.py           # Monday 6 AM
│       ├── generate_pending_images.py    # 3x daily (NEW Nov 20)
│       ├── publish_to_metricool.py       # Hourly (NEW Nov 20)
│       ├── check_approvals.py            # Hourly at :15
│       ├── notify_designers.py           # 3x daily
│       ├── check_designs.py              # 3x daily
│       ├── create_video_folders.py       # Every 5 min (NEW Nov 24)
│       ├── sync_sharepoint_videos.py     # Every 15 min (NEW Nov 24)
│       ├── backup_approved_posts.py      # Daily 11 PM
│       ├── weekly_analytics.py           # Friday 5 PM
│       └── health_check.py               # Every 30 minutes
├── config/                    # Configuration files
│   ├── credentials.env        # API keys (DO NOT COMMIT)
│   ├── credentials.env.example
│   ├── brand_voice.md
│   └── office_config.json
├── data/                      # Database and state
│   ├── marketing.db           # SQLite database
│   └── canva_tokens.json      # Canva OAuth tokens
├── logs/                      # Log files
│   └── orchestrator/
│       └── cron.log           # Cron job logs
├── scripts/                   # Utility scripts
│   └── kill_switch.sh         # Emergency stop
├── Dockerfile                 # Docker container definition
├── docker-compose.yml         # Docker orchestration
├── docker-manage.sh           # Docker management script
└── requirements.txt           # Python dependencies

Important File Locations

Database (all content, approvals, analytics; WAL mode):
    /home/bfsadmin/bfs-projects/Marketing/data/marketing.db
Database WAL (write-ahead log file):
    /home/bfsadmin/bfs-projects/Marketing/data/marketing.db-wal
Database SHM (shared memory file):
    /home/bfsadmin/bfs-projects/Marketing/data/marketing.db-shm
Credentials (API keys and passwords):
    /home/bfsadmin/bfs-projects/Marketing/config/credentials.env
Dashboard Logs (dashboard application logs):
    Docker logs (see Service Management)
Orchestrator Logs (cron job execution logs):
    /home/bfsadmin/bfs-projects/Marketing/logs/orchestrator/cron.log
Virtual Environment (Python packages):
    /home/bfsadmin/bfs-projects/Marketing/venv/

Service Management

Dashboard Service (Docker)

The dashboard runs as a Docker container managed by docker-compose.

Check Dashboard Status:

cd ~/bfs-projects/Marketing
./docker-manage.sh status

Start Dashboard:

cd ~/bfs-projects/Marketing
./docker-manage.sh start

Stop Dashboard:

cd ~/bfs-projects/Marketing
./docker-manage.sh stop

Restart Dashboard:

cd ~/bfs-projects/Marketing
./docker-manage.sh restart

View Dashboard Logs:

cd ~/bfs-projects/Marketing
./docker-manage.sh logs
# or follow logs in real-time:
./docker-manage.sh logs -f

Rebuild Dashboard (after code changes):

cd ~/bfs-projects/Marketing
./docker-manage.sh rebuild

Check Dashboard Health:

# Visit health endpoint
curl http://10.1.10.110:5000/api/status

# Or open in a browser:
# http://10.1.10.110:5000/status

Orchestrator Service (Cron Jobs)

The orchestrator runs as cron jobs on the host system.

View Cron Schedule:

crontab -l

Install/Update Cron Jobs:

cd ~/bfs-projects/Marketing
./orchestrator/setup_cron.sh

Disable All Cron Jobs (without removing them):

# Comment out all BFS Marketing cron jobs
crontab -l | sed 's/^\([^#].*BFS Marketing\)/#\1/' | crontab -

Re-enable Cron Jobs:

# Uncomment all BFS Marketing cron jobs
crontab -l | sed 's/^#\(.*BFS Marketing\)/\1/' | crontab -

Remove All Cron Jobs:

crontab -l | grep -v "BFS Marketing" | grep -v "generate_content.py" | grep -v "check_approvals.py" | crontab -
# Afterwards, run crontab -l to confirm no BFS Marketing entries remain

View Cron Logs:

tail -f ~/bfs-projects/Marketing/logs/orchestrator/cron.log

View System Cron Logs:

grep CRON /var/log/syslog | tail -20

Testing Orchestrator Tasks

All orchestrator tasks can be run manually for testing:

cd ~/bfs-projects/Marketing

# Test health check
python3 orchestrator/tasks/health_check.py --verbose

# Test approval monitoring
python3 orchestrator/tasks/check_approvals.py

# Test designer notifications (force send)
python3 orchestrator/tasks/notify_designers.py --force

# Test design completion check (force send)
python3 orchestrator/tasks/check_designs.py --force

# Test weekly analytics (force run)
python3 orchestrator/tasks/weekly_analytics.py --force

# Test content generation (force run)
python3 orchestrator/tasks/generate_content.py --force

Note: The --force flag bypasses schedule checks and runs immediately. Use for testing only.

Orchestrator Schedule (Updated November 25, 2025)

Task                        Schedule                    Purpose                                             Status
generate_content.py         Monday 6:00 AM              Generate ~65 posts for the week                     Active
generate_pending_images.py  6 AM, 12 PM, 6 PM           Generate DALL-E 3 images for approved posts         Active
publish_to_metricool.py     Every hour                  Schedule approved posts to Metricool                Test Mode
check_approvals.py          Every hour at :15           Monitor pending approvals, send reminders           Active
notify_designers.py         6 AM, 12 PM, 6 PM           Notify designers of posts needing design            Active
check_designs.py            6:45 AM, 12:45 PM, 6:45 PM  Notify when designs are complete                    Active
create_video_folders.py     Every 5 minutes             Create SharePoint folders for video uploads (NEW)   Active
sync_sharepoint_videos.py   Every 15 minutes            Sync videos from SharePoint to database (NEW)       Active
backup_approved_posts.py    Daily 11:00 PM              Backup approved posts to SharePoint                 Pending propagation
weekly_analytics.py         Friday 5:00 PM              Send weekly performance summary                     Active
health_check.py             Every 30 minutes            Monitor system health                               Active

Total: 11 automated tasks
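
For reading reference, the documented schedules map roughly to the cron expressions below. This is illustrative only - the authoritative entries are whatever ./orchestrator/setup_cron.sh installs (with full paths, the venv interpreter, and log redirection), so always confirm with crontab -l:

# Illustrative mapping of the documented schedules to cron expressions (not the actual crontab)
0 6 * * 1         generate_content.py            # Monday 6:00 AM
0 6,12,18 * * *   generate_pending_images.py     # 6 AM, 12 PM, 6 PM
0 * * * *         publish_to_metricool.py        # every hour
15 * * * *        check_approvals.py             # every hour at :15
0 6,12,18 * * *   notify_designers.py            # 6 AM, 12 PM, 6 PM
45 6,12,18 * * *  check_designs.py               # 6:45 AM, 12:45 PM, 6:45 PM
*/5 * * * *       create_video_folders.py        # every 5 minutes
*/15 * * * *      sync_sharepoint_videos.py      # every 15 minutes
0 23 * * *        backup_approved_posts.py       # daily 11:00 PM
0 17 * * 5        weekly_analytics.py            # Friday 5:00 PM
*/30 * * * *      health_check.py                # every 30 minutes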

Manual Trigger API Endpoints (NEW - November 20, 2025)

The dashboard now provides manual trigger buttons for automated workflows. These call API endpoints that run orchestrator tasks immediately:

Manual Content Generation:

curl -X POST http://10.1.10.110:5000/api/manual-content-generation \
  -H "Content-Type: application/json" \
  -d '{"count": 10, "media_type": "all"}'
  • Located on dashboard homepage ("Quick Actions" card)
  • Generates 1-50 posts on-demand
  • Media types: "all", "photos", "videos"
  • Posts created with pending_approval status

Manual Image Generation:

curl -X POST http://10.1.10.110:5000/api/trigger-image-generation
  • Located on "Pending Design" page (green button)
  • Triggers generate_pending_images.py task immediately
  • Processes ALL posts in pending_design status
  • Shows next automated run time with live countdown

Manual Metricool Scheduling:

curl -X POST http://10.1.10.110:5000/api/trigger-scheduling
  • Located on "Approved" page (orange button)
  • Triggers publish_to_metricool.py task immediately
  • Currently in test mode (see Safety Switches below)
  • Shows next automated run time with live countdown

Safety Switches (November 20, 2025)

Located in orchestrator/config.py:

# Multi-Platform Publishing (via Metricool)
METRICOOL_AUTO_PUBLISH = False  # Set True to enable auto-scheduling via Metricool

Test Mode Behavior:
  • Tasks run normally on schedule or when manually triggered
  • All logic executes (post selection, API preparation, timing calculation)
  • NO actual publishing occurs
  • Comprehensive logging of what WOULD happen
  • Post statuses are NOT updated
  • No API calls are made to Metricool

Enabling Auto-Publish:
  1. Test thoroughly with manual triggers in test mode
  2. Review logs to verify behavior is correct
  3. Edit orchestrator/config.py
  4. Change False to True for the desired publish target
  5. Save the file (no restart needed - takes effect on the next task run)
  6. Monitor the first few executions closely
  7. Check Facebook/Metricool to verify posts were scheduled

Recommendation: If multiple publish targets are configured, enable one at a time and test for 1-2 weeks before enabling the next.
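
To make the test-mode behavior concrete, here is a minimal, self-contained sketch of how such a safety switch gates publishing. It is illustrative only - the real logic lives in orchestrator/tasks/publish_to_metricool.py and reads the flag from orchestrator/config.py:

# Minimal sketch of safety-switch gating - illustrative, not the production code
METRICOOL_AUTO_PUBLISH = False  # in production, imported from orchestrator.config

def process_post(post: dict) -> None:
    # Preparation logic runs in both modes (post selection, payload, timing)
    payload = {"post_id": post["post_id"], "scheduled_time": post["scheduled_time"]}

    if not METRICOOL_AUTO_PUBLISH:
        # Test mode: log what WOULD happen; no API call, no status update
        print(f"[TEST MODE] Would schedule {payload['post_id']} at {payload['scheduled_time']}")
        return

    # Live mode: call the Metricool API and mark the post as scheduled (omitted here)
    print(f"Scheduling {payload['post_id']} via Metricool...")

process_post({"post_id": "POST-123", "scheduled_time": "2025-12-01 09:00"})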


Subscriptions & Services

1. Anthropic Claude API

Purpose: AI content generation
Service: Claude API (Anthropic)
Console: https://console.anthropic.com/

Credential Location: /home/bfsadmin/bfs-projects/Marketing/config/credentials.env
Credential Name: ANTHROPIC_API_KEY

Usage: ~65 posts per week = ~260 posts per month
Estimated Cost: $5-15/month (varies by model usage)

How to Check Usage:
  1. Visit https://console.anthropic.com/
  2. Log in with the BFS Anthropic account
  3. Navigate to the "Usage" section
  4. View API usage and costs

How to Rotate Key:

cd ~/bfs-projects/Marketing
nano config/credentials.env
# Update ANTHROPIC_API_KEY=sk-ant-api03-NEW-KEY-HERE
# Save and exit (Ctrl+X, Y, Enter)

# Restart dashboard
./docker-manage.sh restart

2. Microsoft 365 Services

Purpose: Teams notifications, SharePoint storage, SMTP email
Services: Azure AD, Microsoft Graph API, Office 365 SMTP
Console: https://portal.azure.com/

Credential Location: /home/bfsadmin/bfs-projects/Marketing/config/credentials.env

Credentials:
  • AZURE_TENANT_ID: Azure AD tenant ID
  • AZURE_CLIENT_ID: App registration client ID
  • AZURE_CLIENT_SECRET: App registration secret
  • SMTP_USER: Email address for sending notifications
  • SMTP_PASSWORD: Email account password
  • TEAMS_CHANNEL_EMAIL: Teams channel email address

Azure AD App: BFS Marketing Automation
Required Permissions: Sites.ReadWrite.All (Application)

How to Check SharePoint Access:

cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
from shared.sharepoint_client import SharePointClient
sp = SharePointClient()
try:
    files = sp.list_files('/')
    print(f'✓ SharePoint working - found {len(files)} files')
except Exception as e:
    print(f'✗ SharePoint error: {e}')
"

How to Check SMTP/Teams:

cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
from shared.teams_client import TeamsNotificationClient
teams = TeamsNotificationClient()
teams.send_notification('Test', 'Test notification from IT admin')
print('✓ Test notification sent to Teams')
"

3. Canva API

Purpose: Automated graphic design (future feature)
Service: Canva Developer API
Console: https://www.canva.dev/

Credential Location: /home/bfsadmin/bfs-projects/Marketing/config/credentials.env

Credentials:
  • CANVA_CLIENT_ID: Canva app client ID
  • CANVA_CLIENT_SECRET: Canva app secret

OAuth Tokens: /home/bfsadmin/bfs-projects/Marketing/data/canva_tokens.json

Status: OAuth complete, integration ready but not yet in production use.

How to Re-authorize Canva (if tokens expire):

cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
from shared.canva_client import CanvaClient
canva = CanvaClient()
auth_url = canva.get_authorization_url()
print(f'Visit this URL to authorize: {auth_url}')
"
# Follow the authorization flow

4. Metricool API

Purpose: Social media scheduling
Service: Metricool
Console: https://app.metricool.com/

Status: Automated scheduling is implemented but runs in test mode only (see Safety Switches above); manual scheduling is used in production
Reason: The Metricool API requires the Advanced plan ($43-172/month)

Current Workflow: Manual export to the Metricool UI for scheduling.

5. Email Accounts

Marketing Email: marketingbfs@benchmarkfs.org
Purpose: Primary contact, notification recipient

Automation Email: automation@benchmarkfs.org
Purpose: SMTP sender for Teams notifications

Backup Email: jordan.hoelscher@benchmarkfs.org
Purpose: Escalation contact


Troubleshooting

Dashboard Not Responding

Symptom: Cannot access http://10.1.10.110:5000

Diagnosis:

# Check if container is running
cd ~/bfs-projects/Marketing
./docker-manage.sh status

# Check if port is listening
sudo netstat -tuln | grep 5000

# Check container logs
./docker-manage.sh logs

Solution:

# Restart dashboard
cd ~/bfs-projects/Marketing
./docker-manage.sh restart

# If restart doesn't help, rebuild
./docker-manage.sh rebuild

Database Locked Errors

Symptom: "database is locked" errors in logs

Cause: Multiple processes trying to access SQLite database simultaneously

Note: As of November 17, 2025, the database now uses WAL (Write-Ahead Logging) mode, which significantly reduces locking issues. Most concurrent access problems should be resolved.

Solution 1 - Verify WAL mode is enabled:

cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
import sqlite3
conn = sqlite3.connect('data/marketing.db')
result = conn.execute('PRAGMA journal_mode').fetchone()[0]
print(f'Journal mode: {result}')
if result != 'wal':
    print('WARNING: Database not in WAL mode')
    print('Converting to WAL mode...')
    conn.execute('PRAGMA journal_mode=WAL')
    print('Converted to WAL mode')
conn.close()
"

Solution 2 - Stop dashboard temporarily (if WAL mode doesn't help):

cd ~/bfs-projects/Marketing
./docker-manage.sh stop
# Run your task
python3 orchestrator/tasks/health_check.py
# Restart dashboard
./docker-manage.sh start

Solution 3 - Check for stuck processes:

# Find processes using the database
sudo lsof ~/bfs-projects/Marketing/data/marketing.db

# Also check WAL and SHM files
sudo lsof ~/bfs-projects/Marketing/data/marketing.db-wal
sudo lsof ~/bfs-projects/Marketing/data/marketing.db-shm

# Kill if necessary
sudo kill -9 <PID>

Note on Database Timeouts: The system now uses a 10-second connection timeout and 5-second busy timeout, which should handle most concurrent access scenarios.
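
If you need a standalone script that opens the database with similar settings, the following stdlib-only sketch shows the idea. The exact values and pragmas used by shared/database.py may differ - prefer get_db() from that module whenever possible:

# Sketch: open the SQLite database with a connection timeout, busy timeout, and WAL mode
import sqlite3

DB_PATH = "/home/bfsadmin/bfs-projects/Marketing/data/marketing.db"

conn = sqlite3.connect(DB_PATH, timeout=10)   # wait up to 10 s to acquire the file lock
conn.execute("PRAGMA busy_timeout = 5000")    # retry busy writes for up to 5 s
conn.execute("PRAGMA journal_mode = WAL")     # no-op if the database is already in WAL mode
print(conn.execute("PRAGMA journal_mode").fetchone()[0])
conn.close()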

Cron Jobs Not Running

Symptom: Automated tasks not executing on schedule

Diagnosis:

# Check cron service is running
sudo systemctl status cron

# Check cron jobs are installed
crontab -l | grep "BFS Marketing"

# Check cron logs
grep CRON /var/log/syslog | tail -20

# Check orchestrator logs
tail -50 ~/bfs-projects/Marketing/logs/orchestrator/cron.log

Solution:

# Restart cron service
sudo systemctl restart cron

# Reinstall cron jobs
cd ~/bfs-projects/Marketing
./orchestrator/setup_cron.sh

# Test task manually
python3 orchestrator/tasks/health_check.py --verbose

Notifications Not Sending

Symptom: Teams notifications not appearing in channel

Diagnosis:

# Check Teams client configuration
cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
import os
from dotenv import load_dotenv
load_dotenv('config/credentials.env')
print('SMTP_USER:', os.getenv('SMTP_USER'))
print('TEAMS_CHANNEL_EMAIL:', os.getenv('TEAMS_CHANNEL_EMAIL'))
"

# Test notification
python3 -c "
from shared.teams_client import TeamsNotificationClient
teams = TeamsNotificationClient()
try:
    teams.send_notification('Test', 'Test from IT troubleshooting')
    print('✓ Notification sent')
except Exception as e:
    print(f'✗ Error: {e}')
"

Common Causes:
  1. SMTP Authentication Failed: SMTP AUTH may need 24-48 hours to propagate after enabling
  2. Wrong Credentials: Check SMTP_USER and SMTP_PASSWORD in credentials.env
  3. Teams Channel Email Wrong: Verify TEAMS_CHANNEL_EMAIL is correct
  4. Firewall Blocking SMTP: Check that port 587 is open for outbound connections

Solution:

# Wait 48 hours after SMTP AUTH enabled (Microsoft propagation)
# Verify credentials
cd ~/bfs-projects/Marketing
nano config/credentials.env
# Update SMTP credentials if needed

# Test SMTP connection manually
python3 -c "
import smtplib
try:
    server = smtplib.SMTP('smtp.office365.com', 587)
    server.starttls()
    server.login('automation@benchmarkfs.org', 'PASSWORD_HERE')
    print('✓ SMTP connection successful')
    server.quit()
except Exception as e:
    print(f'✗ SMTP error: {e}')
"

Content Generator Failing

Symptom: Content generation task fails, no posts created

Diagnosis:

# Check orchestrator logs
tail -50 ~/bfs-projects/Marketing/logs/orchestrator/cron.log

# Test content generator manually
cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 agents/content_generator.py

# Check API key
python3 -c "
import os
from dotenv import load_dotenv
load_dotenv('config/credentials.env')
key = os.getenv('ANTHROPIC_API_KEY')
print(f'API key configured: {key[:20]}...' if key else 'API key NOT configured')
"

# Check database
python3 -c "
from shared.database import get_db
db = get_db()
count = db.get_connection().execute('SELECT COUNT(*) FROM content_library').fetchone()[0]
print(f'Total posts in database: {count}')
"

Common Causes:
  1. API Key Invalid: Anthropic API key expired or incorrect
  2. API Rate Limit: Exceeded the Claude API rate limit
  3. Database Issues: Database locked or corrupted
  4. Network Issues: Cannot reach the Anthropic API

Solution:

# Check Anthropic Console for API key and usage
# Visit: https://console.anthropic.com/

# Rotate API key if needed
cd ~/bfs-projects/Marketing
nano config/credentials.env
# Update ANTHROPIC_API_KEY

# Restart dashboard
./docker-manage.sh restart

# Test again
python3 agents/content_generator.py

Disk Space Issues

Symptom: System running out of disk space

Diagnosis:

# Check disk usage
df -h

# Check database size
du -h ~/bfs-projects/Marketing/data/marketing.db

# Check log sizes
du -h ~/bfs-projects/Marketing/logs/

# Check Docker disk usage
docker system df

Solution:

# Clean Docker images and containers
docker system prune -a

# Rotate/compress old logs
cd ~/bfs-projects/Marketing/logs/orchestrator
gzip cron.log.old

# Archive old database records (if needed)
# See "Database Backups" section below

Maintenance Tasks

Log Rotation

Logs can grow large over time. Rotate them regularly.

Cron Logs:

cd ~/bfs-projects/Marketing/logs/orchestrator
# Rotate log
mv cron.log cron.log.$(date +%Y%m%d)
touch cron.log
# Compress old log
gzip cron.log.$(date +%Y%m%d)

Docker Logs:

# Docker logs are automatically rotated by Docker
# Configure in docker-compose.yml:
# logging:
#   driver: "json-file"
#   options:
#     max-size: "10m"
#     max-file: "3"

Automated Log Rotation (recommended):

# Create logrotate config
sudo nano /etc/logrotate.d/bfs-marketing

# Add this configuration:
/home/bfsadmin/bfs-projects/Marketing/logs/orchestrator/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    create 644 bfsadmin bfsadmin
}

# Test logrotate
sudo logrotate -f /etc/logrotate.d/bfs-marketing

Database Backups

Manual Backup (WAL Mode):

cd ~/bfs-projects/Marketing/data

# For WAL mode databases, no need to stop dashboard
# WAL mode allows concurrent reads during backup

# Create backup
sqlite3 marketing.db ".backup marketing_backup_$(date +%Y%m%d).db"

# Compress backup
gzip marketing_backup_$(date +%Y%m%d).db

# Store backup in safe location
# Recommended: Copy to network share or cloud storage

# Note: Backup includes all WAL transactions up to backup point

Alternative: Checkpoint WAL Before Backup (for smaller backup files):

cd ~/bfs-projects/Marketing/data

# Checkpoint the WAL file (merge it into main database)
sqlite3 marketing.db "PRAGMA wal_checkpoint(TRUNCATE)"

# Now create backup
sqlite3 marketing.db ".backup marketing_backup_$(date +%Y%m%d).db"
gzip marketing_backup_$(date +%Y%m%d).db

Automated Daily Backup (cron job):

# Add to crontab
crontab -e

# Add this line (runs daily at 2 AM):
0 2 * * * cd /home/bfsadmin/bfs-projects/Marketing/data && sqlite3 marketing.db ".backup marketing_backup_$(date +\%Y\%m\%d).db" && gzip -f marketing_backup_$(date +\%Y\%m\%d).db

Restore from Backup:

cd ~/bfs-projects/Marketing/data
# Stop dashboard
../docker-manage.sh stop

# Restore backup
gunzip -c marketing_backup_YYYYMMDD.db.gz > marketing.db

# Restart dashboard
../docker-manage.sh start

Monitoring Health Checks

The system includes automated health monitoring that runs every 30 minutes.

View Recent Health Checks:

cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
from shared.database import get_db
db = get_db()
runs = db.get_recent_orchestrator_runs(limit=20)
health_runs = [r for r in runs if r['task_name'] == 'health_check']
for run in health_runs[:5]:
    print(f\"{run['started_at']}: {run['status']} - {run['result_summary']}\")
"

Manual Health Check:

cd ~/bfs-projects/Marketing
python3 orchestrator/tasks/health_check.py --verbose

What Health Check Monitors:
  • Database size (alerts if > 500 MB)
  • Disk usage (alerts if > 90%)
  • Memory usage (alerts if > 90%)
  • Stuck workflows (alerts if posts waiting > 4 hours)
  • Recent errors in the system log
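
To spot-check two of these thresholds without running the full task, a small stdlib-only sketch (the authoritative checks and thresholds live in health_check.py and orchestrator/config.py):

# Quick spot check of database size and disk usage - illustrative only
import os, shutil

DB_PATH = "/home/bfsadmin/bfs-projects/Marketing/data/marketing.db"

db_mb = os.path.getsize(DB_PATH) / (1024 * 1024)
total, used, free = shutil.disk_usage("/")

print(f"Database size: {db_mb:.1f} MB (alert threshold: 500 MB)")
print(f"Disk usage:    {used / total * 100:.1f}% (alert threshold: 90%)")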

Updating Configuration

Update Orchestrator Configuration:

cd ~/bfs-projects/Marketing
nano orchestrator/config.py

# Edit settings:
# - APPROVAL_REMINDER_THRESHOLD (hours before reminder)
# - APPROVAL_ESCALATION_THRESHOLD (hours before escalation)
# - POSTS_PER_WEEK_TARGET (expected post count)
# - MAX_DB_SIZE_MB (database size alert threshold)
# etc.

# Save and exit (Ctrl+X, Y, Enter)

# No restart needed - cron jobs read config on each run

Update Dashboard Configuration:

cd ~/bfs-projects/Marketing
nano config/credentials.env

# Update any credentials or settings
# Save and exit

# Restart dashboard to pick up changes
./docker-manage.sh restart

Update Office Configuration:

cd ~/bfs-projects/Marketing
nano config/office_config.json

# Add/remove offices, update social accounts
# Save and exit

# Restart dashboard
./docker-manage.sh restart

Python Package Updates

Check for Updates:

cd ~/bfs-projects/Marketing
source venv/bin/activate
pip list --outdated

Update Specific Package:

source venv/bin/activate
pip install --upgrade package-name

# Update requirements.txt
pip freeze > requirements.txt

Update All Packages (use caution):

source venv/bin/activate
pip install --upgrade -r requirements.txt
pip freeze > requirements.txt

# Test after updating
python3 -c "from shared.database import get_db; print('✓ Database OK')"
python3 -c "from agents.content_generator import ContentGenerator; print('✓ Content Generator OK')"

# Rebuild Docker container
./docker-manage.sh rebuild

Emergency Procedures

How to Disable Automation Immediately

Emergency Kill Switch:

cd ~/bfs-projects/Marketing
bash scripts/kill_switch.sh

This script will:
  1. Stop the dashboard Docker container
  2. Disable all cron jobs (by commenting them out)
  3. Kill any running content generation processes
  4. Display confirmation

Manual Emergency Stop:

# Stop dashboard
cd ~/bfs-projects/Marketing
./docker-manage.sh stop

# Disable cron jobs
crontab -l | sed 's/^\([^#].*BFS Marketing\)/#\1/' | sed 's/^\([^#].*generate_content\)/#\1/' | sed 's/^\([^#].*check_approvals\)/#\1/' | crontab -

# Kill any running Python processes
pkill -f "content_generator.py"
pkill -f "orchestrator/tasks"

How to Manually Generate Content

If automation fails and you need to generate content manually:

cd ~/bfs-projects/Marketing
source venv/bin/activate

# Generate content for current week
python3 agents/content_generator.py

# Force regenerate even if content exists
python3 agents/content_generator.py --force

# Check generated content
python3 -c "
from shared.database import get_db
from datetime import datetime, timedelta
db = get_db()
# Get Monday of current week
today = datetime.now()
monday = today - timedelta(days=today.weekday())
week_start = monday.strftime('%Y-%m-%d')
conn = db.get_connection()
posts = conn.execute(
    'SELECT COUNT(*) FROM content_library WHERE week_start = ?',
    (week_start,)
).fetchone()[0]
print(f'Posts generated for week of {week_start}: {posts}')
"

How to Reset Stuck Workflows

If posts are stuck in a particular state:

View Stuck Posts:

cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
from shared.database import get_db
from datetime import datetime, timedelta
db = get_db()
cutoff = (datetime.now() - timedelta(hours=48)).strftime('%Y-%m-%d %H:%M:%S')
conn = db.get_connection()
stuck = conn.execute('''
    SELECT post_id, status, updated_at
    FROM content_library
    WHERE updated_at < ? AND status NOT IN ('approved', 'scheduled', 'published')
    ORDER BY updated_at
''', (cutoff,)).fetchall()
print(f'Found {len(stuck)} stuck posts:')
for post in stuck:
    print(f'  {post[0]}: {post[1]} (last updated: {post[2]})')
"

Reset Stuck Posts to Pending Approval:

cd ~/bfs-projects/Marketing
source venv/bin/activate
python3 -c "
from shared.database import get_db
db = get_db()
conn = db.get_connection()
cursor = conn.cursor()

# Reset posts stuck in design_complete for > 48 hours
cursor.execute('''
    UPDATE content_library
    SET status = 'pending_approval', updated_at = CURRENT_TIMESTAMP
    WHERE status = 'design_complete'
    AND updated_at < datetime('now', '-48 hours')
''')
affected = cursor.rowcount
conn.commit()
print(f'Reset {affected} stuck posts')
"

Delete Posts and Regenerate:

cd ~/bfs-projects/Marketing
source venv/bin/activate

# Delete all posts for current week
python3 -c "
from shared.database import get_db
from datetime import datetime, timedelta
db = get_db()
today = datetime.now()
monday = today - timedelta(days=today.weekday())
week_start = monday.strftime('%Y-%m-%d')
conn = db.get_connection()
cursor = conn.cursor()
cursor.execute('DELETE FROM content_library WHERE week_start = ?', (week_start,))
cursor.execute('DELETE FROM office_assignments WHERE week_start = ?', (week_start,))
conn.commit()
print(f'Deleted all posts for week of {week_start}')
"

# Regenerate content
python3 agents/content_generator.py --force

How to Contact for Help

Primary Contact: jordan.hoelscher@benchmarkfs.org
Purpose: System owner; can answer questions about workflow and business logic

Claude Code: Available via SSH on the Raspberry Pi
Purpose: AI assistant with full knowledge of the codebase; can help troubleshoot and fix issues

Anthropic Support: https://support.anthropic.com/
Purpose: Issues with the Claude API, rate limits, billing

Microsoft Support: https://support.microsoft.com/
Purpose: Issues with Azure AD, SharePoint, Teams, SMTP


Advanced Administration

Direct Database Access

SQLite Command Line:

cd ~/bfs-projects/Marketing/data
sqlite3 marketing.db

# Useful commands:
.tables                    # List all tables
.schema content_library    # View table schema
SELECT COUNT(*) FROM content_library;  # Count posts
SELECT * FROM content_library WHERE status = 'pending_approval' LIMIT 5;
.exit                      # Exit

Database Schema:
  • content_library: All generated posts (image and video)
    - New fields (Nov 17, 2025): media_type, video_concept, video_script, video_shots, required_participants, props_location, estimated_duration, video_url, primary_platform
  • office_assignments: Post assignments to offices
  • workflow_state: Automation workflow state
  • analytics_history: Performance metrics
  • system_log: Application logs
  • approval_history: Approval/rejection history
  • orchestrator_runs: Task execution history

Database Mode: WAL (Write-Ahead Logging) for concurrent access
  • Allows the dashboard and orchestrator tasks to run simultaneously
  • Significantly reduces database locking errors
  • Creates -wal and -shm files alongside the main database file

Useful Queries:

-- Posts by status
SELECT status, COUNT(*) FROM content_library GROUP BY status;

-- Posts by media type (NEW)
SELECT media_type, COUNT(*) FROM content_library GROUP BY media_type;

-- Video posts with details (NEW)
SELECT post_id, video_concept, estimated_duration, required_participants
FROM content_library
WHERE media_type = 'video'
ORDER BY created_at DESC;

-- Posts by platform (NEW)
SELECT primary_platform, COUNT(*) FROM content_library GROUP BY primary_platform;

-- Recent approvals
SELECT post_id, action, approver, timestamp
FROM approval_history
ORDER BY timestamp DESC LIMIT 10;

-- Rejected posts with reasons (ENHANCED)
SELECT post_id, rejection_reason, status, updated_at
FROM content_library
WHERE status = 'rejected'
ORDER BY updated_at DESC;

-- Recent orchestrator runs
SELECT task_name, status, started_at, duration_seconds
FROM orchestrator_runs
ORDER BY started_at DESC LIMIT 10;

-- Posts created this week
SELECT COUNT(*) FROM content_library
WHERE created_at >= date('now', 'weekday 0', '-7 days');

-- Check WAL mode status
PRAGMA journal_mode;

-- Check database integrity
PRAGMA integrity_check;

Python Virtual Environment

The system uses a Python virtual environment for dependency isolation.

Activate Virtual Environment:

cd ~/bfs-projects/Marketing
source venv/bin/activate
# Prompt will change to show (venv)

Deactivate Virtual Environment:

deactivate

Recreate Virtual Environment (if corrupted):

cd ~/bfs-projects/Marketing
# Stop dashboard
./docker-manage.sh stop

# Remove old venv
rm -rf venv/

# Create new venv
python3 -m venv venv
source venv/bin/activate

# Install packages
pip install --upgrade pip
pip install -r requirements.txt

# Test
python3 -c "from shared.database import get_db; print('✓ OK')"

# Restart dashboard
./docker-manage.sh start

Docker Management

View Docker Containers:

docker ps -a

View Docker Images:

docker images

Remove Old Images (free up space):

docker system prune -a

Access Dashboard Container Shell:

docker exec -it bfs-marketing-dashboard bash
# You're now inside the container
# Navigate to /app for application files
exit

View Container Resource Usage:

docker stats bfs-marketing-dashboard

System Resource Monitoring

Check Disk Space:

df -h

Check Memory Usage:

free -h

Check CPU Usage:

top
# Press 'q' to quit

Check Network Connections:

sudo netstat -tuln

Check Running Processes:

ps aux | grep python
ps aux | grep docker

Git Repository Management

The system is version-controlled with Git.

View Current Status:

cd ~/bfs-projects/Marketing
git status

View Recent Commits:

git log --oneline -10

Pull Latest Changes:

cd ~/bfs-projects/Marketing
git pull origin main

# Rebuild dashboard after pulling changes
./docker-manage.sh rebuild

Commit Changes (if needed):

cd ~/bfs-projects/Marketing
git add .
git commit -m "Description of changes"
git push origin main

IMPORTANT: Never commit config/credentials.env or data/ directory. These are in .gitignore.
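
To verify that the sensitive paths are actually covered by .gitignore before committing:

cd ~/bfs-projects/Marketing
git check-ignore -v config/credentials.env data/marketing.db
# Each ignored path prints the .gitignore rule that matches it;
# if a path produces no output, it is NOT ignored - fix .gitignore before committing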

Network Configuration

Dashboard Port: 5000
Protocol: HTTP (internal network only)

Firewall Rules (if needed):

# Allow port 5000 on local network only
sudo ufw allow from 10.1.10.0/24 to any port 5000

# Check firewall status
sudo ufw status

Change Dashboard Port (if port 5000 conflicts):

cd ~/bfs-projects/Marketing
nano docker-compose.yml

# Change ports section:
# ports:
#   - "5001:5000"  # External:Internal

# Rebuild
./docker-manage.sh rebuild

# Update DASHBOARD_URL in orchestrator/config.py
nano orchestrator/config.py
# Change DASHBOARD_URL = 'http://10.1.10.110:5001'

Appendix: Quick Reference

Essential Commands

# SSH to system
ssh bfsadmin@10.1.10.110

# Navigate to project
cd ~/bfs-projects/Marketing

# Dashboard management
./docker-manage.sh status|start|stop|restart|logs|rebuild

# View orchestrator logs
tail -f logs/orchestrator/cron.log

# Test orchestrator tasks
python3 orchestrator/tasks/health_check.py --verbose

# Emergency stop
bash scripts/kill_switch.sh

# Database backup
cd data && sqlite3 marketing.db ".backup marketing_backup_$(date +%Y%m%d).db" && cd ..

# Check system status
curl http://10.1.10.110:5000/api/status

File Locations Quick Reference

Project:        /home/bfsadmin/bfs-projects/Marketing/
Database:       /home/bfsadmin/bfs-projects/Marketing/data/marketing.db
Credentials:    /home/bfsadmin/bfs-projects/Marketing/config/credentials.env
Logs:           /home/bfsadmin/bfs-projects/Marketing/logs/orchestrator/cron.log
Virtual Env:    /home/bfsadmin/bfs-projects/Marketing/venv/

Service URLs

Dashboard:      http://10.1.10.110:5000
Status API:     http://10.1.10.110:5000/api/status
Health Check:   http://10.1.10.110:5000/status

Anthropic:      https://console.anthropic.com/
Azure:          https://portal.azure.com/
Canva Dev:      https://www.canva.dev/
Metricool:      https://app.metricool.com/


Recent Updates

November 25, 2025 - Release Management System

Video Workflow Overhaul:
  • Introduced a Release Management system for organizing videos into campaigns
  • The same video concept can support multiple Releases (national, regional, test)
  • Edit/Delete release functionality in the Video Post Detail page
  • Videos from deleted releases return to the Video Sorter for reassignment

New Database Schema:
  • video_releases table: Stores release campaigns (release_id, post_id, release_name, release_type, target_date, target_platforms)
  • video_files table enhanced: Added release_id, video_status, scheduled_date, published_date, platform_scheduled

New Dashboard Routes:
  • /video-post/<post_id> - Video Post Detail page with Release management
  • /video-stage2 - Combined view of all video posts past Stage 1
  • /api/create-release - Create a new release for a post
  • /api/update-release - Edit release details
  • /api/delete-release - Delete a release (unassigns videos)
  • /api/get-releases/<post_id> - Get releases for a post
  • /api/delete-db-video - Delete an unassigned database video
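
As an example of how these endpoints might be called from the command line, the sketch below posts a hypothetical payload to /api/create-release. The field names are inferred from the video_releases columns above and are not authoritative - check web/dashboard.py for the exact parameters each route expects:

curl -X POST http://10.1.10.110:5000/api/create-release \
  -H "Content-Type: application/json" \
  -d '{"post_id": "POST-ID-HERE", "release_name": "Example Release", "release_type": "national", "target_date": "2025-12-01", "target_platforms": ["Facebook"]}'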

Video Sorter Enhancements:
  • Shows both SharePoint and database videos
  • Database videos show a "Previously assigned video" indicator
  • The Release dropdown populates based on the selected Post
  • Auto-creates a "Default Release" if no releases exist

Per-Video Status Tracking:
  • 🟢 Available - Ready to schedule
  • 🟡 Scheduled - Marked for a specific platform/date
  • 🔵 Published - Already posted
  • ⚫ Archived - No longer in use
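
To see how videos are currently distributed across these statuses, a quick query against the video_files.video_status column described above (the stored status values may differ from the display labels shown here):

sqlite3 ~/bfs-projects/Marketing/data/marketing.db \
  "SELECT video_status, COUNT(*) FROM video_files GROUP BY video_status;"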

UI Improvements:
  • Toast notifications for success/error feedback
  • Sticky page header with navigation dropdown
  • Post Information card redesign with sectioned layout
  • Dark mode fixes for modals and dropdowns


November 20, 2025 - Automated Workflows & Manual Triggers

New Orchestrator Tasks:
  • generate_pending_images.py - Automated DALL-E 3 image generation (3x daily)
  • publish_to_metricool.py - Automated Metricool scheduling (hourly, test mode)

New Dashboard Features:
  • Manual content generation (Quick Actions card on the homepage)
    - Generate 1-50 posts on-demand
    - Media type selection (all/photos/videos)
    - API endpoint: /api/manual-content-generation
  • Automation banners with manual triggers
    - Pending Design: "Generate Now" button + next run countdown
    - Approved: "Schedule Now" button + next run countdown
    - Homepage: Quick Actions content generation
  • Live countdown timers for all automated tasks
  • System Status page showing all 9 orchestrator tasks

Configuration Updates:
  • Safety switches in orchestrator/config.py:
    - METRICOOL_AUTO_PUBLISH = False (test mode)
  • New manual trigger API endpoints for immediate workflow execution

Dashboard Updates:
  • Templates include automation banners (requires Docker rebuild)
  • "Published" status added to all dropdown menus
  • Enhanced system status page with orchestrator task details

Important: Template changes require rebuilding Docker image:

docker compose build dashboard && docker compose up -d dashboard

November 17, 2025 - Database & Video Integration

Database Improvements:
  • Converted to WAL (Write-Ahead Logging) mode for concurrent access
  • Eliminated most database locking errors
  • Added a 10-second connection timeout and 5-second busy timeout
  • Refactored approval methods to use a single database connection

New Schema Fields:
  • media_type: 'image' or 'video' (default: 'image')
  • video_concept: Description of the video idea
  • video_script: Narration or dialogue for the video
  • video_shots: JSON array of the shot list
  • required_participants: Who needs to be in the video
  • props_location: Props needed and filming location
  • estimated_duration: Target video length
  • video_url: URL to the uploaded video file
  • primary_platform: Target platform (default: 'Facebook')

New Features:
  • Content generator creates 80% image posts + 20% video posts
  • Dashboard displays a video badge (🎥) vs an image badge (📸)
  • Platform badges show the target social media platform
  • Enhanced rejection workflow with regenerate/archive buttons
  • Rejection reasons displayed prominently in the rejected posts list
  • New "archived" status for permanently rejected ideas

Technical Changes:
  • Database connection management improved (single connection per operation)
  • WAL mode eliminates most concurrent access issues
  • Proper error handling for database operations
  • Better feedback mechanism for rejected posts


Document Version: 1.3
Last Updated: November 25, 2025
Maintained By: BFS IT Department