Moving a large MySQL database (over 15 GB) from server to server with mysqldump

by adjc98   Last Updated September 11, 2019 17:06

I need to move a MySQL database of about 17 GB from a cPanel/CentOS server where I do not have root access to a CentOS server where I have full root access. I do have limited SSH access on the cPanel server, with mysqldump available (I think).

When doing a test database download from the cPanel MySQL backup utility, the server ran out of RAM and shut down. I only downloaded about 250 MB as a test.

My choices seem to be:

- Master/slave replication (I don't have root, so that seems out)
- Rsync the database files (I don't have root, so that seems out)
- mysqldump
- phpMyAdmin (crashes the server when the tables have too many rows)

At this point I am planning on doing a mysqldump from the cPanel server. I don't want to have it locked down for a long time, since our website is live. I saw that mysqldump's --quick option (enabled by default as part of --opt) retrieves rows from the server one at a time instead of loading the whole table into the buffer; the -r/--result-file option, by contrast, only redirects the dump output to a file. Some of the tables have 2 million rows. Most are under 50 thousand rows.
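A minimal sketch of a low-memory, low-locking dump command follows; the hostname, user, and database name are placeholder values, and --single-transaction assumes the tables are InnoDB:

```shell
# Compose a dump command that avoids long locks and large memory use.
# --single-transaction : consistent snapshot without locking InnoDB tables
# --quick              : stream rows one at a time (on by default via --opt)
# --hex-blob           : dump binary columns safely as hex
# Host, user, and database name below are placeholders for illustration.
DB="mydb"
DUMP_CMD="mysqldump --single-transaction --quick --hex-blob -h source-host -u appuser -p ${DB}"
# Compress on the fly so a ~17 GB dump takes far less disk and transfer time:
echo "${DUMP_CMD} | gzip > ${DB}.sql.gz"
```

Piping through gzip also means the full uncompressed dump never has to sit on the source server's disk.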

What is the best way to move the database without locking down the system or causing the server to crash because of the ram being overloaded?

Both servers have powerful CPUs and 16 GB of RAM, with fairly current MySQL versions.

Would you recommend a table by table dump or is that even necessary?

I might be able to get the current cPanel host to set up my.cnf for me so I can configure the master/slave connection (not sure about that one, though).

1 Answer

What you need to do is mysqldump separate tables in batches and load them.
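The batch-and-wait control flow used in the script below can be sketched generically; here a simple echo into a temp file (the path and job names are arbitrary) stands in for each mysqldump | mysql pipeline:

```shell
# Sketch of the batch-and-wait pattern: launch background jobs, and every
# COMMIT_LIMIT jobs stop and wait for the batch to finish before continuing.
# The echo into a temp file stands in for a real "mysqldump | mysql" pipeline.
OUT=/tmp/batch_demo_$$.txt
: > "${OUT}"
COMMIT_LIMIT=2
COMMIT_COUNT=0
for JOB in t1 t2 t3 t4 t5
do
    ( echo "copied ${JOB}" >> "${OUT}" ) &
    COMMIT_COUNT=$((COMMIT_COUNT + 1))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait    # block until this batch of background jobs completes
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ] ; then wait ; fi
COMPLETED=$(wc -l < "${OUT}")
rm -f "${OUT}"
```

Capping the batch size keeps the number of simultaneous dump/load pipelines bounded, so neither server's RAM is overloaded.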

On your new CentOS server, run a script like the following:



# Connection options for the source and target servers
# (placeholders -- fill in your real hosts and credentials)
SRC_CONN="-h source-host -u user -pPassword"
TGT_CONN="-h localhost -u user -pPassword"
COMMIT_LIMIT=4   # how many tables to dump in parallel per batch
COMMIT_COUNT=0

SQL="SELECT table_name FROM information_schema.tables"
SQL="${SQL} WHERE table_schema = 'xcart'"
SQL="${SQL} ORDER BY data_length"
mysql ${SRC_CONN} -ANe"${SQL}" > /tmp/ListOfTables.txt

MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers --events"
for TBL in `cat /tmp/ListOfTables.txt`
do
    mysqldump ${SRC_CONN} ${MYSQLDUMP_OPTIONS} xcart ${TBL} | mysql ${TGT_CONN} -Dxcart &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ] ; then wait ; fi

I have other ideas in my old post: How can I optimize a mysqldump of a large database?

Give it a Try !!!

October 25, 2014 23:25
