Generally speaking, my answer is: don't do this. I use NTFS all over the place, on passed-through disks and inside qcow images, all in VMs, and this is one of a handful of gotchas I try to avoid. Resizing inside the VM with native Windows software works better.

A few of these issues seem to stem from the metadata held in the NTFS system areas. I'm not claiming to know exactly why, but most of the tools I've tried for manipulating NTFS images that have run under a Windows OS end up borked. Images created under Debian and never initialized under a Windows OS seem to behave differently.

That said, I did once do a resize by mounting the image as a loop device, expanding it with GParted, and then running it under the VM (there's a rough sketch of that after the truncate command below). It seemed fine after a disk check in Windows, and I ran it for a few months. Then I noticed it was a few gigs larger than it should have been: those gigs were "bad sectors" recorded in the metadata, and no process I tried could reclaim the space.

On shrinks, there seems to be leftover metadata that points beyond the partition limits, so a disk check may fix it. Or, if you can use a native Windows app to pack the data at the beginning so all the metadata is correct, you could resize the partition to leave open space at the end, then truncate the image file, something like:
Code: Select all
# END_SECTOR is the last partition's ending sector; SECTOR_SIZE is usually 512
truncate --size=$(( (END_SECTOR + 1) * SECTOR_SIZE )) newimage.img
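For what it's worth, the loop-device part went roughly like this for me. This is just a sketch; newimage.img is a placeholder, and the numbers fdisk prints for your image are the ones that matter:

Code: Select all
# attach the image as a loop device; -P scans for partitions, --show prints the device name
LOOP=$(sudo losetup -fP --show newimage.img)

# resize the NTFS partition on the loop device
sudo gparted "$LOOP"

# detach before touching the backing file
sudo losetup -d "$LOOP"

# read off the last partition's End sector and the sector size (usually 512)
fdisk -l newimage.img

The End sector and sector size that fdisk reports are the values that go into the truncate command above.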
But wait, I should pay attention! Since this is a real disk, use Windows to condense the data and leave unpartitioned space at the end.
Then mount it under Debian and continue...
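In case it helps, mounting the NTFS side under Debian usually looks something like this; /dev/sdb1 and /mnt/win are placeholders for your actual partition and mount point:

Code: Select all
sudo apt install ntfs-3g
sudo mkdir -p /mnt/win
sudo mount -t ntfs-3g /dev/sdb1 /mnt/win

One caveat: if Windows hibernated or Fast Startup left the volume marked in use, ntfs-3g will refuse to mount it read-write, so shut Windows down cleanly first.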