virsh (libvirt) snapshot ... bugged?

Here you can discuss every aspect of Debian. Note: not for support requests!
rsi
Posts: 21
Joined: 2014-10-20 00:52

virsh (libvirt) snapshot ... bugged?

#1 Post by rsi »

Hello,

after I had massive problems with LVM2 and caching, and nobody could help with the problem, I switched the VM storage from LVM volumes to QEMU qcow2 images.

But now I have a problem with snapshots and backups of running VM disks.
I am trying to use virsh (libvirt) to make snapshots and backups of (running) VM disks. And this is how it should work...

Code: Select all

root@vmm:~# virsh domblklist bifroest
Target     Source
------------------------------------------------
sda        /opt/VMs/Storage/bifroest.qcow2

root@vmm:~# virsh snapshot-create-as bifroest --diskspec sda,file=/opt/VMs/Storage/SnapShot_bifroest.qcow2 --no-metadata --disk-only --quiesce --atomic
Domain snapshot 1530293277 created
root@vmm:~# virsh domblklist bifroest
Target     Source
------------------------------------------------
sda        /opt/VMs/Storage/SnapShot_bifroest.qcow2

root@vmm:~# rsync -avhPSW /opt/VMs/Storage/bifroest.qcow2 /opt/backup/VMs/bifroest/bifroest.qcow2
sending incremental file list
bifroest.qcow2
         27.62G 100%   61.37MB/s    0:07:09 (xfr#1, to-chk=0/1)

sent 27.63G bytes  received 35 bytes  64.33M bytes/sec
total size is 27.62G  speedup is 1.00
...but...

Code: Select all

root@vmm:~# virsh domblklist bifroest && echo "Show me my snapshot..." && virsh snapshot-list bifroest
Target     Source
------------------------------------------------
sda        /opt/VMs/Storage/SnapShot_bifroest.qcow2

Show me my snapshot...
 Name                 Creation Time             State
------------------------------------------------------------
Bug no. 1 ...
'virsh snapshot-list' is not working!

Code: Select all

root@vmm:~# virsh domblklist bifroest
Target     Source
------------------------------------------------
sda        /opt/VMs/Storage/SnapShot_bifroest.qcow2

root@vmm:~# virsh snapshot-info --domain bifroest --snapshotname /opt/VMs/Storage/SnapShot_bifroest.qcow2
error: Domain snapshot not found: no domain snapshot with matching name '/opt/VMs/Storage/SnapShot_bifroest.qcow2'
error: Domain snapshot not found: no domain snapshot with matching name '/opt/VMs/Storage/SnapShot_bifroest.qcow2'

root@vmm:~# virsh snapshot-info --domain bifroest --snapshotname SnapShot_bifroest.qcow2
error: Domain snapshot not found: no domain snapshot with matching name 'SnapShot_bifroest.qcow2'
error: Domain snapshot not found: no domain snapshot with matching name 'SnapShot_bifroest.qcow2'

root@vmm:~# virsh snapshot-info --domain bifroest --current
error: Domain snapshot not found: the domain does not have a current snapshot
error: Domain snapshot not found: the domain does not have a current snapshot

root@vmm:~# virsh snapshot-delete --domain bifroest --current --children
error: Domain snapshot not found: the domain does not have a current snapshot
error: Domain snapshot not found: the domain does not have a current snapshot
Bug no. 2 ...
The 'virsh snapshot-*' subcommands are not working!
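
Maybe this is connected to the --no-metadata flag above, which tells libvirt not to keep a record of the snapshot it creates, so there would simply be nothing for snapshot-list or snapshot-info to report. A sketch of the same call with the metadata kept (not tested here):

Code: Select all

root@vmm:~# virsh snapshot-create-as bifroest --diskspec sda,file=/opt/VMs/Storage/SnapShot_bifroest.qcow2 --disk-only --quiesce --atomic
root@vmm:~# virsh snapshot-list bifroest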

And another bug...

Code: Select all

root@vmm:~# virsh blockcommit bifroest sda --verbose --pivot --delete
error: unsupported flags (0x2) in function qemuDomainBlockCommit

root@vmm:~# virsh blockcommit bifroest sda --verbose --pivot
Block commit: [100 %]
Successfully pivoted
root@vmm:~# virsh domblklist bifroest
Target     Source
------------------------------------------------
sda        /opt/VMs/Storage/bifroest.qcow2

root@vmm:~# lf /opt/VMs/Storage/*bifroest*
 26G -rw-r--r-- 1 root root  26G Jun 29 20:06 /opt/VMs/Storage/bifroest.qcow2
140M -rw------- 1 root root 140M Jun 29 19:51 /opt/VMs/Storage/SnapShot_bifroest.qcow2

root@vmm:~# virsh help blockcommit
  NAME
    blockcommit - Start a block commit operation.

  SYNOPSIS
    blockcommit <domain> <path> [--bandwidth <number>] [--base <string>] [--shallow] [--top <string>] [--active] [--delete] [--wait] [--verbose] [--timeout <number>] [--pivot] [--keep-overlay] [--async] [--keep-relative] [--bytes]

  DESCRIPTION
    Commit changes from a snapshot down to its backing image.

  OPTIONS
    [--domain] <string>  domain name, id or uuid
    [--path] <string>  fully-qualified path of disk
    --bandwidth <number>  bandwidth limit in MiB/s
    --base <string>  path of base file to commit into (default bottom of chain)
    --shallow        use backing file of top as base
    --top <string>   path of top file to commit from (default top of chain)
    --active         trigger two-stage active commit of top file
    --delete         delete files that were successfully committed
    --wait           wait for job to complete (with --active, wait for job to sync)
    --verbose        with --wait, display the progress
    --timeout <number>  implies --wait, abort if copy exceeds timeout (in seconds)
    --pivot          implies --active --wait, pivot when commit is synced
    --keep-overlay   implies --active --wait, quit when commit is synced
    --async          with --wait, don't wait for cancel to finish
    --keep-relative  keep the backing chain relatively referenced
    --bytes          the bandwidth limit is in bytes/s rather than MiB/s
So, the "delete" option doese not work! I must remove the snapshot manual...

And yet another bug!

Code: Select all

root@vmm:~# virsh domblklist freya
Target     Source
------------------------------------------------
sda        /opt/VMs/Storage/freya.qcow2
sdb        /dev/zvol/zfs0/smb

root@vmm:~# virsh snapshot-create-as freya --diskspec sda,file=/opt/VMs/Storage/SnapShot_freya.qcow2 --no-metadata --disk-only --quiesce --atomic
error: unsupported configuration: source for disk 'sdb' is not a regular file; refusing to generate external snapshot name
'virsh snapshot-create-as' can't handle it if there is a second device and I only want to snapshot the first device (sda).
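
A possible workaround might be to name the second disk explicitly and exclude it from the snapshot (a sketch, assuming this libvirt version honours snapshot=no in a --diskspec; I have not verified it):

Code: Select all

root@vmm:~# virsh snapshot-create-as freya --diskspec sda,file=/opt/VMs/Storage/SnapShot_freya.qcow2 --diskspec sdb,snapshot=no --no-metadata --disk-only --quiesce --atomic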

The whole snapshot functionality in virsh seems bugged!
So, back to LVM2 without caching; snapshots and backups work with that.
Since I use ZFS on Linux for the rest of the system, I also tried a ZFS volume for the VM, but with ZFS the disk access in the VMs is extremely slow, even with SSD caching.

Any comments or help are very welcome!

Edit:
virsh version...

Code: Select all

Virsh command line tool of libvirt 4.0.0
See web site at https://libvirt.org/

Compiled with support for:
 Hypervisors: QEMU/KVM LXC UML Xen LibXL OpenVZ VMware VirtualBox ESX Test
 Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
 Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Sheepdog Gluster ZFS
 Miscellaneous: Daemon Nodedev AppArmor Secrets Debug DTrace Readline

CwF
Global Moderator
Posts: 2638
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 192 times

Re: virsh (libvirt) snapshot ... bugged?

#2 Post by CwF »

At the risk of being no help at all, I'll ask: why bother? For my uses I haven't found snapshots uniquely useful. I use qcow2 layers on backing files, blockpull to a new backing file when I'm sure, and manually copy backing files along the way just in case... I just haven't come up with a case where I wished I had a "snapshot". Really, I'm simply making my own without the added complexity?
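
If it helps to picture it, a minimal sketch of that flow (domain name and paths are just placeholders): create a qcow2 overlay on a backing file, run the VM on the overlay, and once you're sure, blockpull the backing data up into the active image so it no longer depends on the old backing file.

Code: Select all

root@vmm:~# qemu-img create -f qcow2 -b /opt/VMs/base.qcow2 /opt/VMs/work.qcow2
root@vmm:~# virsh blockpull mydomain sda --wait --verbose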

CwF
Global Moderator
Posts: 2638
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 192 times

Re: virsh (libvirt) snapshot ... bugged?

#3 Post by CwF »

...and on the storage: are you using a fancy file system inside a qcow2 file? Why? I'd say play with that on a passed-through disk, and give the VM its own unencumbered block device.

rsi
Posts: 21
Joined: 2014-10-20 00:52

Re: virsh (libvirt) snapshot ... bugged?

#4 Post by rsi »

I used LVM2 (raw volumes) for the VMs for a long time, because snapshot-then-backup also worked very well.
Then I added an SSD as a cache to LVM2, and after a system failure the data could not be recovered (see my other thread here in the forum). Luckily I had backups of almost everything.
After that I wanted to try ZFS on Linux and removed LVM2, but that was too slow for me in the VMs. So I made a mixture of ZFS and qcow2, but even this is not really a good solution.
What is important to me is the ability to back up the VMs while they are running. I have four permanently running VMs.

CwF
Global Moderator
Posts: 2638
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 192 times

Re: virsh (libvirt) snapshot ... bugged?

#5 Post by CwF »

OK, I appreciate the push to get a how-to out of this. I agree it would be nice to have and rely on online+inline backup, but my KISS enforcer kicked in. I have tested many configs for my system and blown up many. I found that host-caching anything off the host disk was a bad idea. I also gave up on individual VM backups. A 24/7/365 requirement in my mind spells redundant hardware of some kind, a failover. By segregating user data from the OSes I'm now comfortable with my method. With all static backing files on the host disk, and a host that has few foreground activities, that fully encrypted LVM SSD doesn't change much. I take the time to shut it down and image that disk to a backup only before an upgrade, and all upgrades are manual and only happen a few times a year. That is a 45-60 GB file and includes all VMs. Depending on the VM's needs, its top 'runtime' qcow2 layer can be on another disk for a speed boost. The user data is all on another disk, natively mounted in a VM for near-native speed and served up to all VMs, and that VM has its own vfio ethernet port. For incrementals I pass around a USB drive; for sensitive data I use a VM with a host-disk qcow2 that I copy manually to the same docked hard disk that holds another layer of everything. Upon catastrophe I have 3+ layers to rebuild from. I'll keep an eye out for your automagical solution, I couldn't find one. I typically have 4-6 VMs going: a few hardware-assisted, a few internet-enabled, a few intranet-isolated, one mixed.

debiman
Posts: 3063
Joined: 2013-03-12 07:18

Re: virsh (libvirt) snapshot ... bugged?

#6 Post by debiman »

CwF, would it hurt to use the enter/return key more often?
this is hard to look at, let alone read.
no offense.

CwF
Global Moderator
Posts: 2638
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 192 times

Re: virsh (libvirt) snapshot ... bugged?

#7 Post by CwF »

I did stream there, sorry. But then I do hate scrolling through white space.

rsi
Posts: 21
Joined: 2014-10-20 00:52

Re: virsh (libvirt) snapshot ... bugged?

#8 Post by rsi »

@CwF
You're probably right about the possibilities. I've tried a lot in the meantime, from hardware and software RAID to various backup options. So far I haven't really found a good solution for a SoHo setup.
I have one VM on the internet with various services, and the remaining three on the intranet (SoHo, e.g. a file server...). I don't want to shut them down too often, but after negative experiences without at least weekly backups, I'm still looking for the ideal solution.
And I have tried many different approaches, including my own scripts (e.g. in connection with LVM).

CwF
Global Moderator
Posts: 2638
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 192 times

Re: virsh (libvirt) snapshot ... bugged?

#9 Post by CwF »

Maybe simply add another VM whose only function is to collect your user data and back it up to wherever...

Following this format:
1. Hypervisor = simply the host; does not manipulate data, does not store data.
2. Program VMs = manipulators and creators of data; do not store data.
3. Storage VMs = store data.
3a. Primary file server = stores data.
3b. Backup file server = backs up data to alternate media.

If I'm thinking right, 3b is what you don't have? With this extra VM, the 2's and 3a would meet your desire to never need to be shut down. 3b does not need to run all the time; it could spin up daily or weekly, automatically or manually, with or without a GUI, collecting to a docked disk maybe.
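
As a very rough sketch of what 3b could do when it wakes up (the host name, paths and schedule here are only examples), pull the shares from the primary file server and drop them on the docked disk:

Code: Select all

# weekly cron entry on the backup file server VM (3b), e.g. in /etc/cron.d/backup
0 3 * * 0  root  rsync -avhPS primary:/srv/shares/ /mnt/docked/shares/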

2. The program VMs can be condensed and expanded as needed.
2a. Common VM primary backing file qcow2 that resides on the host.
2b. Difference layer = renamed hostname, mailname, and some other changes, may also reside on host.
2c. Runtime layer = where all writes happen, extra ongoing user configs, disposable layer, on host or separate linked disk.
2d. Scratch = uniquely assigned block device if needed, not a layer

My example here is my browser VMs: a common OS install with Firefox, with a primary backing file of 1.1 GB. Here I combine 2b and 2c into one host-disk runtime layer, uniquely identified by the VM name. I have 5 unique VMs, and the runtimes grow to a few GB before a refresh. One can use a 2b layer with bookmarks, passwords, specific site cookies or other steady data added, and that layer might be 50+ MB. Then the runtime layer collects junk in use for a few months, and at any time you can scrap that top 2c layer, create a new one, and you're off fresh.
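
To make the layers concrete, roughly how such a chain gets built with qemu-img (the file names here are only examples):

Code: Select all

# 2a: common, read-only backing file with the base OS install
# 2b: per-VM difference layer created on top of 2a
root@vmm:~# qemu-img create -f qcow2 -b /opt/VMs/common-base.qcow2 /opt/VMs/browser1-diff.qcow2
# 2c: disposable runtime layer on top of 2b; the VM's disk points at this file
root@vmm:~# qemu-img create -f qcow2 -b /opt/VMs/browser1-diff.qcow2 /opt/VMs/browser1-runtime.qcow2
# to start fresh, delete the runtime layer and recreate it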

My non-browser Debian VMs also start with that common 1.1 GB backing file. Their 2b layers are a few hundred MB; one is over a gig. These do use a 2c runtime layer that can be scrapped if something happens. One is hardware-assisted.

So, 5+ VMs with a host impact of about 5 GB.

3. This class could be on the same backing file again! In a similar way, its second qcow2 layer would add whatever is needed to accomplish the file server role. Both the primary and backup file servers could also share this layer (hundreds of MB); the renaming, or differentiation (domain name etc.), is on layer three (~50 MB). Layer 4 could be the runtime (disposable) layer, and likely you'd have unique block devices attached (2d).

Note that I've been up to 5 qcow2 layers and noticed no real slowdown, except perhaps a slower initial startup. I'd try not to have a write layer on a second host-controlled disk, but invert that and put all read-only layers on the secondary host disk. This seems to survive yanking the plug...

With this segregation, backups should be easier! Of course I failed to get the details of what programs and what type of data you use, and I have left out a zillion details... but maybe this blurb is clearer!
