Your MySQL database served you well, but now you're hitting its limits. You've heard about PostgreSQL's superior JSON support, better transaction handling, and more advanced features. The question isn't whether to migrate—it's how to do it without losing data or crashing your application.
This guide walks you through a real-world migration scenario: moving an existing MySQL database to PostgreSQL on Oracle Linux 8. You'll learn how to convert schemas, handle data type incompatibilities, validate data integrity, and execute the migration with minimal downtime.
After reading this guide, you'll confidently migrate your MySQL databases to PostgreSQL and verify everything worked correctly.
Table of Contents
- Why Migrate from MySQL to PostgreSQL?
- Pre-Migration Checklist
- Step 1: Assess Your MySQL Database
- Step 2: Set Up Your PostgreSQL Target Database
- Step 3: Export MySQL Schema
- Step 4: Convert MySQL Schema to PostgreSQL
- Step 5: Migrate Data
- Step 6: Validate Data Integrity
- Common Migration Mistakes
- Troubleshooting
- Post-Migration Steps
Why Migrate from MySQL to PostgreSQL?
MySQL and PostgreSQL serve different needs. MySQL excels at simple, read-heavy operations. PostgreSQL shines when you need JSONB documents, complex transactions, advanced data types, or sophisticated queries.
Real reasons teams migrate:
- JSONB Support — Store and query JSON documents natively, not as strings
- Better Transaction Handling — transactional DDL and MVCC concurrency with well-defined isolation levels
- Advanced Features — Window functions, CTEs, full-text search, custom types
- Scalability — more efficient handling of complex queries and highly concurrent workloads
- No Licensing Costs — both are free and open source; PostgreSQL's permissive license also sidesteps MySQL's dual-licensing questions
The migration process itself isn't complex, but the devil is in the details: MySQL and PostgreSQL differ in data types, syntax, and behavior. This guide works through those differences step by step.
Pre-Migration Checklist
Before you start, make sure you have everything in place:
Prerequisites:
- MySQL database running on Oracle Linux 8 (see our complete MySQL 8.4 installation guide)
- PostgreSQL installed and running on Oracle Linux 8 (see our PostgreSQL 16 installation guide)
- Access to root/sudo on both MySQL and PostgreSQL servers
- Backup of your MySQL database (critical safety measure)
- List of all MySQL users and their privileges
- Knowledge of which tables are mission-critical
- Planned downtime window (even 30 minutes helps)
Backup Your MySQL Database Immediately:
mysqldump -u root -p --all-databases > mysql_backup_$(date +%Y%m%d).sql
# This creates a complete backup before any changes
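Before moving on, verify the dump actually completed; a truncated backup is worse than none. Run this the same day as the backup (or substitute the actual filename):
ls -lh mysql_backup_$(date +%Y%m%d).sql
tail -n 1 mysql_backup_$(date +%Y%m%d).sql
# A healthy dump ends with a "-- Dump completed on ..." comment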
Step 1: Assess Your MySQL Database
Before migrating, understand what you're working with. This prevents surprises mid-migration.
Check database size:
mysql -u root -p -e "SELECT table_schema AS 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables GROUP BY table_schema;"
Expected output:
Database Size (MB)
information_schema 0.15
mysql 1.25
your_database 145.30
List all tables and their row counts:
mysql -u root -p your_database -e "SELECT table_name, table_rows,
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Size_MB'
FROM information_schema.tables WHERE table_schema = 'your_database';"
Why this matters: Large tables (>1 GB) may need special handling. Tables with millions of rows benefit from a staged migration (see Step 5, Method 2).
Check for MySQL-specific features:
mysql -u root -p your_database -e "SHOW CREATE TABLE your_table\G"
Look for the following (a one-shot scan sketch follows the list):
- Auto-increment columns (different syntax in PostgreSQL)
- ENUM types (PostgreSQL uses a different approach)
- Full-text indexes (PostgreSQL uses a different method)
- Stored procedures (syntax differs significantly)
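Rather than running SHOW CREATE TABLE one table at a time, you can scan everything in one pass; a quick sketch (the /tmp path and grep pattern are illustrative only):
# Dump the schema to a scratch file and flag MySQL-specific constructs
mysqldump -u root -p --no-data --routines your_database > /tmp/schema_scan.sql
grep -nE "AUTO_INCREMENT|ENUM\(|FULLTEXT|PROCEDURE|FUNCTION" /tmp/schema_scan.sql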
Document your findings:
Create a simple spreadsheet with:
- Table name
- Row count
- Size (MB)
- Key columns
- Special features (ENUM, full-text search, etc.)
This becomes your migration checklist.
Step 2: Set Up Your PostgreSQL Target Database
PostgreSQL should already be installed. Now prepare it for data import.
Create a migration user in PostgreSQL:
sudo -u postgres psql
CREATE USER migration_user WITH PASSWORD 'strong_password_123';
ALTER USER migration_user CREATEDB;
\q
Create the target database:
sudo -u postgres createdb -O migration_user your_database
Verify connection from your user:
psql -h localhost -U migration_user -d your_database
# Type password when prompted
# If you see "your_database=>" prompt, connection works
# Type \q to exit
Key takeaway: PostgreSQL uses role-based access control. Create a dedicated migration user to isolate changes.
Step 3: Export MySQL Schema
Exporting just the schema (structure without data) helps you test conversion before migrating millions of rows.
Export MySQL schema only:
mysqldump -u root -p --no-data your_database > schema_only.sql
Verify the export:
wc -l schema_only.sql
# Should show hundreds or thousands of lines depending on complexity
head -50 schema_only.sql
# Should show CREATE TABLE statements
Save this file safely:
cp schema_only.sql schema_only_backup_$(date +%Y%m%d).sql
Why schema-only first? You can test conversion without waiting for terabytes of data to export. Find and fix issues before doing the full migration.
Step 4: Convert MySQL Schema to PostgreSQL
This is where MySQL and PostgreSQL syntax differ. We'll handle the conversion systematically.
Understanding the Differences
MySQL syntax that breaks in PostgreSQL:
| MySQL | PostgreSQL | Example Fix |
|---|---|---|
| AUTO_INCREMENT | SERIAL or GENERATED | id INT AUTO_INCREMENT → id SERIAL PRIMARY KEY |
| ENUM('a','b') | VARCHAR or custom type | status ENUM('active','inactive') → status VARCHAR(20) |
| Backticks for names | Double quotes or no quotes | `user` → "user" or user |
| UNSIGNED INT | INTEGER (no unsigned) | count INT UNSIGNED → count INTEGER CHECK (count >= 0) |
| TEXT/BLOB variants | TEXT and BYTEA | data LONGBLOB → data BYTEA |
| TIMESTAMP DEFAULT CURRENT_TIMESTAMP | Same syntax | ON UPDATE CURRENT_TIMESTAMP has no equivalent; use a trigger |
| ENGINE=InnoDB | Not supported | Remove this line |
Manual Conversion (Recommended for Complex Schemas)
Step 1: Open the schema file:
nano schema_only.sql
Step 2: Remove MySQL-specific statements:
Search for these lines and handle each as noted:
- /*!40101 SET @OLD_... (version-specific conditional comments; remove)
- ENGINE=InnoDB (remove)
- DEFAULT CHARSET=utf8mb4 (remove)
- DROP TABLE IF EXISTS (optional; valid in PostgreSQL, keep for safety)
Step 3: Convert data types:
Find this:
CREATE TABLE users (
id INT AUTO_INCREMENT PRIMARY KEY,
status ENUM('active','inactive') DEFAULT 'active',
email VARCHAR(255) NOT NULL UNIQUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
Replace with this:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
status VARCHAR(20) DEFAULT 'active',
email VARCHAR(255) NOT NULL UNIQUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
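On PostgreSQL 10 and later, an identity column is the SQL-standard alternative to SERIAL; either works for this migration:
CREATE TABLE users (
id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
status VARCHAR(20) DEFAULT 'active',
email VARCHAR(255) NOT NULL UNIQUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);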
Step 4: Create PostgreSQL-specific enhancements (optional):
After creating base tables, add constraints PostgreSQL supports:
-- Add CHECK constraint for ENUM-like behavior
ALTER TABLE users ADD CHECK (status IN ('active', 'inactive'));
-- Create INDEX for performance
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_status ON users(status);
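If you want the database to enforce the value set the way MySQL's ENUM did, a native PostgreSQL enum type is an alternative to the CHECK constraint above; a sketch for the users table:
-- Create a real enum type and convert the column to it
CREATE TYPE user_status AS ENUM ('active', 'inactive');
ALTER TABLE users ALTER COLUMN status DROP DEFAULT;
ALTER TABLE users ALTER COLUMN status TYPE user_status USING status::user_status;
ALTER TABLE users ALTER COLUMN status SET DEFAULT 'active';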
Automated Conversion Tool (For Large Schemas)
If your schema is massive, use pgloader:
# Install pgloader (may need the EPEL repository on Oracle Linux 8)
sudo dnf install -y epel-release
sudo dnf install -y pgloader
# Create a pgloader configuration file
cat > migration.load << 'EOF'
LOAD DATABASE
FROM mysql://root:password@localhost:3306/your_database
INTO postgresql://migration_user:password@localhost:5432/your_database
WITH include drop, create tables, create indexes, reset sequences
BEFORE LOAD DO
$$ DROP SCHEMA IF EXISTS public CASCADE; $$,
$$ CREATE SCHEMA public; $$
AFTER LOAD DO
$$ ALTER SCHEMA public OWNER TO migration_user; $$;
EOF
# Run pgloader
pgloader migration.load
Why pgloader? It handles AUTO_INCREMENT conversion, ENUM detection, and other differences automatically. Perfect for complex migrations.
Key takeaway: Test your converted schema by importing it into a test PostgreSQL database first. Never import to production without testing.
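A minimal way to run that test, assuming your converted schema is in schema_only.sql:
# Create a throwaway database, import, and stop at the first error
sudo -u postgres createdb -O migration_user your_database_test
psql -v ON_ERROR_STOP=1 -h localhost -U migration_user -d your_database_test -f schema_only.sql
sudo -u postgres dropdb your_database_test # clean up once it imports without errors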
Step 5: Migrate Data
Now that your schema is ready, migrate the actual data. Choose the method based on your database size.
Method 1: Small Databases (< 1 GB) – Direct Export/Import
Export all MySQL data:
mysqldump -u root -p --no-create-info --compatible=ansi --skip-extended-insert your_database > data_only.sql
# --compatible=ansi swaps backticks for ANSI double quotes; --skip-extended-insert is slower but easier to debug
Import to PostgreSQL:
# Strip MySQL-only statements psql would reject, then import
sed -i '/^LOCK TABLES/d; /^UNLOCK TABLES/d; /^\/\*!/d; /^SET /d' data_only.sql
psql -h localhost -U migration_user -d your_database -f data_only.sql
Monitor progress:
# In another terminal, watch table row counts (add a ~/.pgpass entry so psql doesn't prompt every 5 s)
watch -n 5 'psql -h localhost -U migration_user -d your_database -c "
SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC;"'
Method 2: Large Databases (> 1 GB) – Staged Migration with pgloader
Most reliable for production migrations.
Create a pgloader configuration:
cat > migration_staged.load << 'EOF'
LOAD DATABASE
FROM mysql://root:password@localhost:3306/your_database
INTO postgresql://migration_user:password@localhost:5432/your_database
WITH
batch size = 20MB,
batch rows = 50000,
prefetch rows = 100000
BEFORE LOAD DO
$$ DROP SCHEMA IF EXISTS public CASCADE; $$,
$$ CREATE SCHEMA public; $$;
EOF
Run migration with logging:
pgloader --verbose --logfile migration.log migration_staged.load
Monitor the log in real-time:
tail -f migration.log
Expected output:
Starting pgloader, log system is ready.
[COPY] Copying data from MySQL to PostgreSQL
[LOAD] Processed 1,000,000 rows (42 seconds, 23,809 rows/sec)
[LOAD] Processed 2,000,000 rows (84 seconds, 23,810 rows/sec)
Migration complete: 2,345,678 rows loaded in 98 seconds
Method 3: Very Large Databases (> 10 GB) – Parallel Table Migration
Split the migration into parallel processes:
#!/bin/bash
# migrate_tables.sh - Migrate each table in parallel
read -s -p "MySQL root password: " MYSQL_PWD && echo
export MYSQL_PWD # lets mysql/mysqldump run in the background without prompting
export PGPASSWORD='strong_password_123' # migration_user password, so psql doesn't prompt either
TABLES=$(mysql -u root your_database -e "SHOW TABLES" -sN)
for TABLE in $TABLES; do
echo "Migrating table: $TABLE"
(
mysqldump -u root --no-create-info --compatible=ansi your_database "$TABLE" | \
sed '/^LOCK TABLES/d; /^UNLOCK TABLES/d; /^\/\*!/d; /^SET /d' | \
psql -h localhost -U migration_user -d your_database
) &
done
wait
echo "All tables migrated"
Run it:
bash migrate_tables.sh
Why parallel? Tables load concurrently instead of one after another, which can shrink a multi-hour migration dramatically on a multi-core server.
Key takeaway: Monitor the migration process. If it stalls, check disk space and PostgreSQL logs:
tail -f /var/lib/pgsql/data/log/postgresql*.log # adjust the path for your packaging, e.g. /var/lib/pgsql/16/data/log/
Step 6: Validate Data Integrity
This is the most critical step. Don’t skip this or you’ll discover data problems after production goes live.
Check 1: Row Count Comparison
MySQL row counts:
mysql -u root -p your_database -e "
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY table_rows DESC;"
PostgreSQL row counts:
psql -h localhost -U migration_user -d your_database -c "
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;"
Compare the numbers, but keep in mind that MySQL's table_rows and PostgreSQL's n_live_tup are both statistics-based estimates; small differences here don't necessarily mean lost data.
For exact counts on critical tables, query both sides directly:
mysql -u root -p your_database -e "SELECT COUNT(*) FROM critical_table;"
psql -h localhost -U migration_user -d your_database -c "SELECT COUNT(*) FROM critical_table;"
Check 2: Data Sampling (Random Rows)
MySQL:
mysql -u root -p your_database -e "SELECT * FROM orders ORDER BY RAND() LIMIT 10;"
PostgreSQL:
psql -h localhost -U migration_user -d your_database -c "SELECT * FROM orders ORDER BY RANDOM() LIMIT 10;"
Compare the data and look for:
- Missing columns
- Incorrect data types
- Unexpected NULL values
- Text encoding issues (a quick check follows this list)
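A quick way to confirm both sides agree on character encoding (MySQL's utf8mb4 corresponds to PostgreSQL's UTF8):
mysql -u root -p your_database -e "SHOW VARIABLES LIKE 'character_set_database';"
psql -h localhost -U migration_user -d your_database -c "SHOW server_encoding;"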
Check 3: Constraint Validation
psql -h localhost -U migration_user -d your_database -c "
SELECT constraint_name, table_name
FROM information_schema.table_constraints
WHERE constraint_type = 'FOREIGN KEY';"
psql -h localhost -U migration_user -d your_database << 'EOF'
BEGIN;
INSERT INTO orders (user_id, amount) VALUES (999999, 100.00);
ROLLBACK;
EOF
Expected result: foreign key violation error.
Check 4: Index Validation
mysql -u root -p your_database -e "SHOW INDEXES FROM your_table;"
psql -h localhost -U migration_user -d your_database -c "\d your_table"
# Rebuild indexes after the bulk load (requires ownership of the database)
psql -h localhost -U migration_user -d your_database -c "REINDEX DATABASE your_database;"
Check 5: Sequence / Auto-Increment Validation
mysql -u root -p your_database -e "
SELECT AUTO_INCREMENT
FROM information_schema.tables
WHERE table_schema='your_database'
AND table_name='users';"
psql -h localhost -U migration_user -d your_database -c "
SELECT last_value FROM users_id_seq;"
psql -h localhost -U migration_user -d your_database -c "
SELECT setval('users_id_seq', (SELECT MAX(id) FROM users));"
Key takeaway: Run all five validation checks. A single failure can break production. Document every result.
Common Migration Mistakes
Mistake 1: Skipping the Backup
Problem: You discover data corruption during migration and can’t recover.
Solution:
# Always backup first
mysqldump -u root -p --all-databases > backup_$(date +%Y%m%d_%H%M%S).sql
# Store this file safely, preferably off-server
Mistake 2: Not Testing Schema Conversion
Problem: You migrate millions of rows, then discover your ENUM columns broke during conversion.
Solution:
# Test conversion on a development database first
# Don't import to production without testing
Mistake 3: Forgetting to Reset Sequences
Problem: After migration, new records can’t insert because the sequence hasn’t caught up.
Solution:
psql -h localhost -U migration_user -d your_database << 'EOF'
DO $$
DECLARE rec RECORD;
BEGIN
  FOR rec IN
    SELECT table_name, column_name FROM information_schema.columns
    WHERE table_schema = 'public' AND column_default LIKE 'nextval%'
  LOOP
    EXECUTE format(
      'SELECT setval(pg_get_serial_sequence(%L, %L), COALESCE((SELECT MAX(%I) FROM %I), 1))',
      rec.table_name, rec.column_name, rec.column_name, rec.table_name);
  END LOOP;
END $$;
EOF
Mistake 4: Ignoring Time Zone Issues
Problem: Timestamps are off by hours after migration.
Solution:
# Check MySQL timezone
mysql -u root -p -e "SELECT @@global.time_zone, @@session.time_zone;"
# Set the PostgreSQL database to the same timezone (ALTER DATABASE persists; plain SET only lasts one session)
psql -h localhost -U migration_user -d your_database -c "
ALTER DATABASE your_database SET timezone TO 'UTC';" # Or your timezone
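Then spot-check the same rows on both sides to confirm timestamps line up:
mysql -u root -p your_database -e "SELECT id, created_at FROM users ORDER BY id LIMIT 3;"
psql -h localhost -U migration_user -d your_database -c "SELECT id, created_at FROM users ORDER BY id LIMIT 3;"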
Mistake 5: Not Updating Application Connection Strings
Problem: Application still points to MySQL after migration.
Solution:
# Old MySQL connection
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASS=password
MYSQL_DB=your_database
# New PostgreSQL connection
POSTGRES_HOST=localhost
POSTGRES_USER=migration_user
POSTGRES_PASS=password
POSTGRES_DB=your_database
POSTGRES_PORT=5432
Update your application configuration before pointing to PostgreSQL.
Mistake 6: Running Full Migration Without Testing Rollback
Problem: Migration fails halfway and you can’t easily rollback.
Solution:
# Test on a backup database first
# Have a rollback plan documented
# Backup recent data before starting production migration
# Know your recovery commands
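A rough rollback sketch, assuming the backup from the pre-migration checklist (substitute the real filename date):
# Restore MySQL from the pre-migration dump and point the application back at it
mysql -u root -p < mysql_backup_YYYYMMDD.sql
# Drop and recreate the PostgreSQL target so a retry starts clean
sudo -u postgres dropdb your_database
sudo -u postgres createdb -O migration_user your_database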
Mistake 7: Not Checking PostgreSQL Logs
Problem: Silent data loss during migration goes unnoticed.
Solution:
# Monitor PostgreSQL logs during migration (adjust the path for your version/packaging)
tail -f /var/lib/pgsql/data/log/postgresql*.log
# Look for ERROR and WARNING entries
grep -iE "error|warning" /var/lib/pgsql/data/log/postgresql*.log | tail -20
Troubleshooting
Issue: "ERROR: syntax error in schema file"
Solution:
# The schema still has MySQL syntax; strip the table options in place
sed -i 's/ ENGINE=InnoDB//g' schema_only.sql
sed -i 's/ DEFAULT CHARSET=[a-zA-Z0-9_]*//g' schema_only.sql
sed -i 's/ AUTO_INCREMENT=[0-9]*//g' schema_only.sql
# Column-level AUTO_INCREMENT still needs converting to SERIAL by hand (see Step 4)
# Then try importing again
psql -h localhost -U migration_user -d your_database -f schema_only.sql
Issue: "ERROR: column does not exist"
Solution:
# Check original table definition
mysql -u root -p your_database -e "SHOW CREATE TABLE problem_table\G"
# Wrap problematic column names in double quotes in PostgreSQL
Issue: "FATAL: Ident authentication failed"
Solution:
sudo nano /var/lib/pgsql/data/pg_hba.conf
# Change:
# local all all ident
# To:
# local all all scram-sha-256
# (use md5 instead if your passwords were created under the older md5 scheme)
sudo systemctl reload postgresql # service name may be postgresql-16 with PGDG packages
Issue: "Disk full" during migration
Solution:
df -h
sudo rm -f /var/lib/pgsql/data/log/*.log.* # clear rotated logs if space is tight (adjust the path)
sudo dnf clean all
psql -h localhost -U migration_user -c "SHOW data_directory;"
Issue: Row counts match but data feels wrong
Solution:
mysql -u root -p your_database -e "SELECT * FROM users LIMIT 5;"
psql -h localhost -U migration_user -d your_database -c "SELECT * FROM users LIMIT 5;"
psql -h localhost -U migration_user -d your_database -c "
SELECT COUNT(*) FROM users WHERE email LIKE '%?%';" # a literal '?' often marks lossy charset conversion
If encoding issues are found, re-migrate with the correct character set
(for example, --default-character-set=utf8mb4).
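For example, a re-export pinned to utf8mb4 might look like:
mysqldump -u root -p --default-character-set=utf8mb4 --no-create-info your_database > data_only_utf8.sql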
Post-Migration Steps
Your data is now in PostgreSQL. Complete these critical steps before going live.
Step 1: Update Application Connection Strings
Example for Node.js:
// Old MySQL connection
const mysql = require('mysql');
const connection = mysql.createConnection({
host: 'localhost',
user: 'root',
password: 'password',
database: 'your_database'
});
// New PostgreSQL connection
const { Pool } = require('pg');
const pool = new Pool({
host: 'localhost',
user: 'migration_user',
password: 'password',
database: 'your_database',
port: 5432
});
Test the new connection:
psql -h localhost -U migration_user -d your_database -c "SELECT COUNT(*) FROM users;"
Step 2: Run Application Test Suite
Test all critical workflows (a minimal smoke-test sketch follows this list):
- User authentication (if passwords migrated)
- Data queries
- Insert, update, and delete operations
- Complex reports or analytics queries
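A minimal smoke-test sketch you can adapt (the table and column names come from the earlier examples; swap in your own):
psql -h localhost -U migration_user -d your_database << 'EOF'
BEGIN;
INSERT INTO users (status, email) VALUES ('active', 'smoke_test@example.com');
UPDATE users SET status = 'inactive' WHERE email = 'smoke_test@example.com';
DELETE FROM users WHERE email = 'smoke_test@example.com';
ROLLBACK; -- leave no trace either way
EOF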
Step 3: Set Up Automated Backups
PostgreSQL backups differ from MySQL. Configure both logical and physical backups:
# Logical backup (human-readable)
pg_dump -U migration_user -d your_database > backup_$(date +%Y%m%d).sql
# Physical base backup of the whole cluster (needs the REPLICATION privilege, so run as postgres)
sudo -u postgres pg_basebackup -D /backups/pg_backup -Ft -z
# Schedule daily backups (add a ~/.pgpass entry so pg_dump can authenticate non-interactively)
(crontab -l 2>/dev/null; echo "0 2 * * * pg_dump -U migration_user -d your_database > /backups/backup_\$(date +\%Y\%m\%d).sql") | crontab -
Step 4: Monitor Performance
PostgreSQL has different performance characteristics than MySQL:
# Check active queries
psql -h localhost -U migration_user -d your_database -c "
SELECT pid, query, query_start
FROM pg_stat_activity
WHERE state = 'active';"
# Identify slow queries (requires the pg_stat_statements extension; enabling it is shown below)
psql -h localhost -U migration_user -d your_database -c "
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;"
Step 5: Create Read-Only User for Reporting
Protect production data by restricting reporting access:
psql -h localhost -U migration_user -d your_database << 'EOF'
CREATE USER readonly_user WITH PASSWORD 'read_only_password';
GRANT CONNECT ON DATABASE your_database TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
-- Also cover tables created in the future
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
EOF
Quick Reference
Migration Checklist
- Backup MySQL database
- Document table sizes and row counts
- Install PostgreSQL
- Export MySQL schema
- Convert schema to PostgreSQL syntax
- Test schema import on development database
- Export MySQL data
- Import data into PostgreSQL
- Validate row counts
- Verify constraints and indexes
- Reset sequences and auto-increment values
- Update application connection strings
- Run application tests
- Configure PostgreSQL backups
- Monitor database performance
- Create read-only reporting users
Essential Commands
MySQL Export:
mysqldump -u root -p your_database > full_export.sql
mysqldump -u root -p --no-data your_database > schema_only.sql
mysqldump -u root -p --no-create-info your_database > data_only.sql
PostgreSQL Import:
psql -h localhost -U user -d database -f export.sql
psql -h localhost -U user -d database < export.sql
Data Validation:
mysql -u root -p database -e "SELECT COUNT(*) FROM table;"
psql -U user -d database -c "SELECT COUNT(*) FROM table;"
mysql -u root -p database -e "SELECT * FROM table ORDER BY RAND() LIMIT 10;"
psql -U user -d database -c "SELECT * FROM table ORDER BY RANDOM() LIMIT 10;"
Sequence Reset:
psql -U user -d database -c "
SELECT setval('table_id_seq', (SELECT MAX(id) FROM table));"
Next Steps
If you need assistance planning or executing a production-grade database migration, or want expert help tuning PostgreSQL after migration, reach out through our contact page for professional database services.
Conclusion
Migrating from MySQL to PostgreSQL is reliable and repeatable when done methodically. The keys are testing before production, validating data thoroughly, and resisting the urge to rush.
- Always back up first — your safety net
- Test schema conversion early — catch issues before data moves
- Validate everything — row counts, constraints, and sequences
Your PostgreSQL database now unlocks advanced capabilities such as JSONB, window functions, and stronger transactional guarantees. Migration is complete — optimization and growth start now.
Need professional help with your database migration or infrastructure? Our team at Technoroots Limited specializes in enterprise database solutions.
Contact us at Technoroots for expert guidance on MySQL migrations, PostgreSQL optimization, Oracle Database setup, or any database infrastructure challenges.