
Thread: ZxMig vs ZxBackup

  1. #1
    Senior Member
    Join Date
    Apr 2012
    Posts
    97

    ZxMig vs ZxBackup

    We are preparing to migrate our Zimbra mailserver to a new Zimbra mailserver
    and did a test using ZxMig and a data ferry (USB HDD), but this is rather slow.
    I've read in this forum about using ZxBackup to ZxBackup, which is supposed to be much faster.
    What would be the best approach to migrate and reduce downtime to a minimum?
    Please keep in mind that the total size of our mailboxes is ~780 GB.
    Thanks,
    Perry

  2. #2
    ZeXtras Community Manager ZeXtras Employee Cine's Avatar
    Join Date
    Apr 2011
    Posts
    2,342
    Hello ppdeonet!

    Using ZeXtras Backup to export your data from the source server is not faster in any way.

    The only difference between a ZeXtras Migration Tool export and a ZeXtras Backup export is that using ZeXtras Backup on both servers allows you to perform an Incremental Migration, thus reducing the total downtime of the server along with the impact on the users during the migration process. An example of an Incremental Migration can be found HERE.

    Also, an incremental migration doesn't need a Ferry Store (a USB HDD is pretty slow storage): if the two servers are on the same LAN or connected through a fast link, I'd suggest moving the data using rsync.
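    For instance, something along these lines (just a sketch: the paths and the destination hostname are placeholders, use the actual ZxBackup/ZxMig path on your servers):

    root@sourceserver:~# rsync -avH --progress /opt/zimbra/backup/zextras/ root@destinationserver:/opt/zimbra/backup/zextras/

    Re-running the same command just before the final switch only transfers what changed since the previous run.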


    Feel free to keep us up to date about your migration process and to ask any questions you may have!


    Have a nice day,
    Cine

    P.s.: The ZeXtras Migration Tool Guide has been recently updated; be sure to read THIS paragraph, as it suggests three very important checks to run in order to optimize the migration process.
    IT Support Team Contact Form
    Sales Team Contact Form

    ZeXtras Website
    # ZeXtras Wiki # ZeXtras Store

    Have ZeXtras Suite or ZeXtras Migration Tool been helpful to you?
    Share your experience in the Zimbra Gallery!

    ZeXtras Suite on the Zimbra Gallery
    ZeXtras Migration Tool on the Zimbra Gallery

  3. #3
    Senior Member
    Join Date
    Apr 2012
    Posts
    97
    Thanx Cine
    Is it possible to migrate with ZxMig first and then do an incremental migration using ZxBackup after that?

  4. #4
    CTO ZeXtras Employee d0s0n's Avatar
    Join Date
    Apr 2011
    Posts
    565
    Hi ppdeonet,
    the ZxMig export folder and the ZxBackup folder share the same format.
    So you can use the export folder as the backup path for a FullScan of your Zimbra system and then use rsync to synchronize that folder to the destination system, as Cine suggested.
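    As a rough outline (hedged: the paths below are only examples, and the FullScan itself is started from the ZeXtras administration zimlet rather than from these commands):

    # 1. set the existing ZxMig export folder as the ZxBackup backup path on the source and run a FullScan
    # 2. copy the folder to the destination server's backup path
    root@deolinux1:~# rsync -avH /opt/zimbra/store/export/ root@newserver:/opt/zimbra/backup/zextras/
    # 3. after later scans, re-run the same rsync; only new or changed files are transferred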

    D0s0n
    ZeXtras Website # ZeXtras Wiki # ZeXtras Store

    Head of ZeXtras System Administrators

  5. #5
    Senior Member
    Join Date
    Apr 2012
    Posts
    97
    Thanx d0s0n
    The test migration I started yesterday (ZxMig to a data ferry, i.e. a USB HDD)
    has been running for 19 hours now and there is still only 1.8 GB in the export folder.
    Is ZxBackup a lot faster? How about the CLI?
    Because disk space is limited and we can't push 780 GB over the network, I have to use the data ferry
    for the bulk migration. Also, migrating domain by domain doesn't help much because 95% of the mail is in
    one domain.

  6. #6
    Wise Guy Participant
    Join Date
    Apr 2011
    Posts
    34
    Hello ppdeonet.
    19 hours for 1.8 GB is far too long to be anywhere near the expected behaviour, even accounting for poor I/O performance; you might be using a USB 1.x device, or perhaps there is a problem with your system.
    If you are willing to share more information about your environment, we would be happy to help.
    For instance, the following would be helpful:
    - Hardware details (even if the machine is running inside a virtualized environment)
    - Storage details: how many disks, RAID level used, filesystems, mount options, etc.
    - Some performance statistics: "iostat -xm 1", "vmstat 1", "free -m", top, etc. (one way to capture them is sketched below)
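    Something like this, for example, would collect a short window of statistics into files you can paste from (the sample counts and file names are just suggestions):

    root@yourserver:~# iostat -xm 1 30 > /tmp/iostat.txt
    root@yourserver:~# vmstat 1 30 > /tmp/vmstat.txt
    root@yourserver:~# free -m > /tmp/free.txt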

    Trantor
    ZeXtras Website # ZeXtras Wiki # ZeXtras Store

    ZeXtras System Administrator and Installer Guru

  7. #7
    Senior Member
    Join Date
    Apr 2012
    Posts
    97
    There was a problem with the data ferry.
    The syslog was filled with
    usb 1-1: reset full speed USB device using uhci_hcd and address 2
    I unmounted the ferry, powered it off and on, and connected it to a different USB port.
    Will start another trial run in a couple of minutes.

  8. #8
    Senior Member
    Join Date
    Apr 2012
    Posts
    97
    Even with the USB HDD properly connected, ZxMig is extremely slow.
    A test export to the internal SATA HDD is faster (~1 GB per 15 minutes).
    Hardware:
    1x 2TB SATA HDD internal and 1x 2TB SATA HDD external through USB

    CPU:
    69: None 03.0: 10103 CPU
    [Created at cpu.304]
    Unique ID: 4zLr.j8NaKXDZtZ6
    Hardware Class: cpu
    Arch: X86-64
    Vendor: "GenuineIntel"
    Model: 6.15.6 "Intel(R) Xeon(R) CPU 5130 @ 2.00GHz"
    Features: fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,pni,monitor,ds_cpl,vmx,tm2,ssse3,cx16,xtpr,dca,lahf_lm
    Clock: 1995 MHz
    BogoMips: 3990.07
    Cache: 4096 kb
    Units/Processor: 2
    Config Status: cfg=new, avail=yes, need=no, active=unknown

    root@deolinux1:/opt/zimbra/store/export# iostat -xm 1
    Linux 2.6.24-28-server (deolinux1) 05/16/2012

    avg-cpu: %user %nice %system %iowait %steal %idle
    3.41 0.00 0.47 1.87 0.00 94.24

    Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda 0.35 51.13 1.30 18.31 0.07 0.39 47.29 0.92 46.90 1.85 3.62
    dm-0 0.00 0.00 0.10 1.17 0.00 0.00 8.16 0.02 17.22 2.82 0.36
    dm-1 0.00 0.00 0.01 0.83 0.00 0.00 11.11 0.01 14.33 3.95 0.33
    dm-2 0.00 0.00 0.79 66.00 0.01 0.31 9.79 3.34 49.98 0.47 3.13
    dm-3 0.00 0.00 0.75 1.48 0.06 0.06 113.53 0.21 95.23 4.17 0.93
    sdb 0.00 0.01 0.00 0.97 0.00 0.00 9.91 0.01 6.79 6.38 0.62

    root@deolinux1:/opt/zimbra/store/export# vmstat 1
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    0 0 48 252752 297700 22238040 0 0 17 120 17 12 3 0 94 2
    0 0 48 252920 297700 22238060 0 0 4 60 172 7144 1 1 97 0
    0 0 48 252920 297700 22238068 0 0 0 0 158 8761 2 1 97 0
    3 0 48 254284 297700 22236600 0 0 12 61 61 8050 3 0 96 0
    1 0 48 253912 297704 22236688 0 0 0 1901 364 15943 20 1 68 10
    1 0 48 249988 297704 22240516 0 0 4 45 60 8570 25 1 74 0
    1 0 48 232372 297716 22257792 0 0 12 19428 342 10177 19 4 73 5
    0 0 48 224808 297728 22265392 0 0 4 8 9 12339 26 3 71 0

    root@deolinux1:/opt/zimbra/store/export# free -m
    total used free shared buffers cached
    Mem: 32223 32004 219 0 291 21733
    -/+ buffers/cache: 9979 22244
    Swap: 3812 0 3812

  9. #9
    Wise Guy Participant
    Join Date
    Apr 2011
    Posts
    34
    Quote Originally Posted by ppdeonet View Post
    CPU:....
    The export process is not CPU-bound, so the CPU is nothing to worry about.

    Quote Originally Posted by ppdeonet View Post
    root@deolinux1:/opt/zimbra/store/export# iostat -xm 1
    Linux 2.6.24-28-server (deolinux1) 05/16/2012

    Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda 0.35 51.13 1.30 18.31 0.07 0.39 47.29 0.92 46.90 1.85 3.62
    dm-0 0.00 0.00 0.10 1.17 0.00 0.00 8.16 0.02 17.22 2.82 0.36
    dm-1 0.00 0.00 0.01 0.83 0.00 0.00 11.11 0.01 14.33 3.95 0.33
    dm-2 0.00 0.00 0.79 66.00 0.01 0.31 9.79 3.34 49.98 0.47 3.13
    dm-3 0.00 0.00 0.75 1.48 0.06 0.06 113.53 0.21 95.23 4.17 0.93
    sdb 0.00 0.01 0.00 0.97 0.00 0.00 9.91 0.01 6.79 6.38 0.62
    The await values indicate a rather high latency in device access, but from what I can see of your setup that is pretty much unavoidable.
    The values above were taken during an export with a target residing on the same device as your Zimbra installation, correct?
    I ask because the values seem far too low for an export situation (by the way, "iostat -xm 1" collects a sample every second, but the first block of values is the average since the last reboot, so you need to provide us with a few of the subsequent blocks, which are the actual per-second values reported by the system).
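    For example, something like the following (the sample count and file name are just suggestions) prints 30 one-second samples into a file; skip the since-boot block at the top and paste a few of the later ones:

    root@yourserver:~# iostat -xm 1 30 > /tmp/iostat-during-export.txt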

    Quote Originally Posted by ppdeonet View Post
    root@deolinux1:/opt/zimbra/store/export# vmstat 1
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    0 0 48 252752 297700 22238040 0 0 17 120 17 12 3 0 94 2
    0 0 48 252920 297700 22238060 0 0 4 60 172 7144 1 1 97 0
    0 0 48 252920 297700 22238068 0 0 0 0 158 8761 2 1 97 0
    3 0 48 254284 297700 22236600 0 0 12 61 61 8050 3 0 96 0
    1 0 48 253912 297704 22236688 0 0 0 1901 364 15943 20 1 68 10
    1 0 48 249988 297704 22240516 0 0 4 45 60 8570 25 1 74 0
    1 0 48 232372 297716 22257792 0 0 12 19428 342 10177 19 4 73 5
    0 0 48 224808 297728 22265392 0 0 4 8 9 12339 26 3 71 0
    nothing strange here ...

    Quote Originally Posted by ppdeonet View Post
    root@deolinux1:/opt/zimbra/store/export# free -m
    total used free shared buffers cached
    Mem: 32223 32004 219 0 291 21733
    -/+ buffers/cache: 9979 22244
    Swap: 3812 0 3812
    Total RAM amount and memory occupation are both more than reasonable, nothing amiss there.

    To sum up, the Achilles' heel of your setup seems to be the underlying storage.
    Unless I misunderstood what you wrote, the whole system runs on a single 2TB SATA drive, correct? If so, performance is likely to suffer, although I imagine that might be one of the reasons behind your migration.
    Nothing can be done about the source, since the drive latency will slow down the extraction of data from the running system.
    Something might be done, though, to improve the performance of the export process on the side of the medium the data will be written to.
    A possible improvement would be to connect a SATA drive directly to the internal storage controller of your server, although that would almost certainly be quite invasive.
    A previous question was whether your USB drive is a full-speed or a high-speed USB device, which of course strongly impacts the write speed. A related matter is whether the USB port of the server you're using is full-speed or high-speed, since some servers do not connect all USB ports to a high-speed USB host controller. High-speed devices should be handled by the ehci_hcd Linux kernel module, so it should be possible to tell from your dmesg whether that's the case.
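    For instance, something along these lines should reveal which host controller driver the drive ended up on (the "1-1" address is taken from your earlier syslog line, adjust it if the device shows up elsewhere):

    root@deolinux1:~# dmesg | grep -iE 'ehci|uhci|usb 1-1'

    A "new high speed USB device using ehci_hcd" line means the disk is running at USB 2.0 speed; a "full speed ... using uhci_hcd" line means it is stuck at USB 1.x speed (12 Mbit/s).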
    Reading from your internal drive and writing to an external USB drive should in any case show better performance than you're experiencing now.
    Check the points above and see if you can give us more precise data concerning the iostat output during the export.

    Bye for now.

    Trantor
    ZeXtras Website # ZeXtras Wiki # ZeXtras Store

    ZeXtras System Administrator and Installer Guru

  10. #10
    Senior Member
    Join Date
    Apr 2012
    Posts
    97
    Hi Trantor,

    Thanks for looking into this.
    After making sure the external USB HDD is on high-speed USB (ehci_hcd module), I ran another test run with ZxMig.
    It ran for over 6 hours and the result was ~5 GB of data.
    - stats -
    new accounts: 6
    accounts updated: 0
    skipped accounts(by COS): 0
    item updated: 0
    new metadata: 16133
    new files: 15155
    checked items: 16137
    backup path: /media/usb0/
    skipped items: 0
    items/sec: 0.7020865
    additional notification mails:
    Exceptions: None

    The Zimbra server is indeed set up with one 2TB SATA HDD plus a second 2TB SATA HDD as a RAID 1 mirror.

    Just started a new test run and made a couple of statistical snapshots:


    avg-cpu: %user %nice %system %iowait %steal %idle
    3.34 0.00 0.51 1.82 0.00 94.33

    Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda 0.34 48.71 1.22 17.15 0.06 0.37 48.05 0.82 44.51 1.82 3.34
    dm-0 0.00 0.00 0.08 3.04 0.00 0.01 8.25 0.47 150.82 1.12 0.35
    dm-1 0.00 0.00 0.01 1.06 0.00 0.01 22.96 0.05 43.37 2.91 0.31
    dm-2 0.00 0.00 0.68 60.49 0.01 0.29 9.77 2.91 47.63 0.47 2.85
    dm-3 0.00 0.00 0.72 1.31 0.06 0.06 115.27 0.17 83.49 4.12 0.84
    sdb 0.04 0.47 0.01 46.75 0.00 0.22 9.75 0.03 0.69 0.68 3.19

    avg-cpu: %user %nice %system %iowait %steal %idle
    29.65 0.00 5.41 7.29 0.00 57.65

    Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 456.00 0.00 47.00 0.00 3.59 156.53 0.31 6.60 1.28 6.00
    dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    dm-1 0.00 0.00 0.00 2.00 0.00 0.00 5.00 0.00 0.00 0.00 0.00
    dm-2 0.00 0.00 0.00 486.00 0.00 1.90 8.00 5.43 11.17 0.10 5.00
    dm-3 0.00 0.00 0.00 15.00 0.00 1.69 230.60 0.07 4.67 0.67 1.00
    sdb 0.00 13.00 0.00 1308.00 0.00 6.19 9.69 0.93 0.71 0.71 93.00

    avg-cpu: %user %nice %system %iowait %steal %idle
    20.28 0.00 6.53 8.86 0.00 64.34

    Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 1203.00 5.00 111.00 0.50 5.17 100.16 0.74 6.38 1.38 16.00
    dm-0 0.00 0.00 0.00 2.00 0.00 0.01 8.00 0.02 10.00 5.00 1.00
    dm-1 0.00 0.00 0.00 6.00 0.00 0.03 9.50 0.00 0.00 0.00 0.00
    dm-2 0.00 0.00 1.00 1298.00 0.00 5.07 8.00 10.55 8.12 0.12 15.00
    dm-3 0.00 0.00 4.00 8.00 0.50 0.06 96.17 0.02 1.67 0.83 1.00
    sdb 0.00 15.00 0.00 1326.00 0.00 6.28 9.70 0.85 0.64 0.64 85.00

    avg-cpu: %user %nice %system %iowait %steal %idle
    4.75 0.00 7.36 9.03 0.00 78.86

    Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 134.00 0.00 16.00 0.00 0.59 75.00 0.03 1.88 1.88 3.00
    dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    dm-2 0.00 0.00 0.00 150.00 0.00 0.59 8.00 0.14 0.93 0.20 3.00
    dm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sdb 0.00 13.00 0.00 1363.00 0.00 6.44 9.68 0.87 0.64 0.64 87.00

    root@deolinux1:/media/usb0# vmstat 1
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    0 1 48 5776136 385732 15800232 0 0 16 161 6 2 3 1 94 2
    0 0 48 5775328 385732 15800736 0 0 512 7681 6426 16939 2 0 88 10
    0 0 48 5775208 385732 15801088 0 0 4 6273 6409 16869 2 0 89 9
    0 1 48 5774476 385732 15801716 0 0 512 6049 6337 16702 1 0 91 8
    0 0 48 5774232 385732 15802064 0 0 0 6284 6453 16989 2 1 90 7
    0 0 48 5773444 385732 15802780 0 0 512 6069 6337 16677 1 2 89 9
    1 0 48 5773196 385732 15803036 0 0 0 6412 6483 16936 1 2 87 9
    0 0 48 5772284 385732 15803720 0 0 512 6053 6329 16669 1 9 82 8
    0 0 48 5771680 385732 15804480 0 0 512 6001 6316 16604 1 0 88 10
    0 1 48 5771432 385732 15804696 0 0 0 6135 6545 16985 1 0 90 8
    0 0 48 5770748 385732 15805380 0 0 512 6053 6348 16673 1 0 91 9
    0 0 48 5770572 385732 15805660 0 0 0 6295 6575 17570 1 0 89 10
    0 0 48 5766700 385736 15807832 0 0 512 6457 8054 20562 1 1 90 8
    0 0 48 5765264 385736 15809416 0 0 0 6131 7702 19685 1 2 87 9
    1 0 48 5758408 385768 15816856 0 0 516 16308 6445 17557 26 9 56 8
    0 1 48 5756976 385772 15817912 0 0 512 6577 6242 17561 5 8 80 7


    root@deolinux1:/media/usb0# more /tmp/free.txt
    total used free shared buffers cached
    Mem: 32223 26582 5640 0 376 15427
    -/+ buffers/cache: 10778 21445
    Swap: 3812 0 3812
