Is file size comparison enough to ensure a copy is OK, or do we need MD5?


Re: Is file size comparison enough to ensure a copy is OK, or do we need MD5?

Postby CwF » 2020-07-14 23:14

debian121212 wrote: As in, have you ever copied something only to realize the MD5/SHA1 or whatever is off right after a successful copy-paste operation is over?

No, I haven't had such an issue.

debian121212 wrote: How'd you copy the files?

With Thunar. I segregate data. It doesn't all get the same treatment.

Re: Is file size comparison enough to ensure a copy is OK, or do we need MD5?

Postby debian121212 » 2020-07-15 00:34

CwF wrote: With Thunar. I segregate data. It doesn't all get the same treatment.

How do you address possible future data corruption?

Do you guard against data corruption by re-copying periodically and replacing your cold storage every x years, as I am planning to do, or do you use network storage? What do you use for top-tier data?

Do you use anything above basic consumer-level external hard drives for cold storage? What would you use? By cold storage I mean something like an external hard drive.
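For the record, the periodic re-check I'm planning is roughly the sketch below; the paths are made up, and sha256sum is just one choice of tool (md5sum works the same way).

Code: Select all
    # when the cold-storage drive is written, record a checksum manifest
    cd /mnt/coldstore && find . -type f -print0 | xargs -0 sha256sum > ~/coldstore.sha256

    # every year or so, plug the drive back in and re-verify against the manifest;
    # only mismatches survive the grep, so no output means no detected rot
    cd /mnt/coldstore && sha256sum -c ~/coldstore.sha256 | grep -v ': OK$'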

Re: Is file size comparison enough to ensure a copy is OK, or do we need MD5?

Postby debian121212 » 2020-07-15 02:04

If a regular byte check after a Ctrl+C and Ctrl+V copy-paste is enough, then why does rsync even use MD5?
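As I understand it, rsync's quick check normally compares only size and modification time, and it hashes files to verify what it actually transfers; -c/--checksum makes it hash every file on both sides before deciding. Something like this, with hypothetical paths:

Code: Select all
    # default quick check: skip files whose size and mtime already match
    rsync -av /data/photos/ /mnt/backup/photos/

    # force a full checksum comparison of every file instead
    rsync -avc /data/photos/ /mnt/backup/photos/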

Re: Is file size comparison enough to ensure a copy is OK, or do we need MD5?

Postby CwF » 2020-07-15 02:55

I don't pay attention to what I use as much as to the pattern of use. First off, storage is stupid cheap. I once forked over the cash for a 6-disc passive-backplane SCSI array - now storage is dirt cheap, so buy some!

My OSes are imaged, multiple copies. I image to whatever spinning disk is current, then write the image to a new or recycled disk. That disk goes into use, and the old disk, a known good copy, does nothing until I retask it. That happens every year or so, all SSD.

Data sets are handled by size. Most exist as qcow2 images, GBs in size, and live on SSDs with copies on the spinner of the moment. Large sets warrant a device of their own and are treated like an OS, minus the image-to-file step: there's the current one in use, the last one used, and maybe the one before that. When moving to a new device I'll usually refresh the prior device first, yep, two steps back. This is the only time I'd want the bit-for-bit check, shown below. When it passes, the current set is deemed good and moved to the new device. Then the two-steps-old device is retasked after some random gestation period.
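By bit-for-bit check I mean nothing fancier than the following; the device and file names are only examples:

Code: Select all
    # byte-for-byte compare of the old and new copies
    cmp /mnt/old/dataset.qcow2 /mnt/new/dataset.qcow2

    # or hash both sides and compare the two lines by eye
    sha256sum /mnt/old/dataset.qcow2 /mnt/new/dataset.qcow2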
Small data, i.e. a handful of spreadsheets and CherryTree files that benefit from today's backup, might get backed up to the system's USB drive and may be intentionally trapped in VM snapshots.
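One way to do that snapshot trick, assuming the guest runs under libvirt (the domain and snapshot names here are hypothetical):

Code: Select all
    # freeze today's state of the guest, small working files and all, inside the image
    virsh snapshot-create-as office-vm backup-2020-07-15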

I wish higher-end 120GB disks were still common; my OSes will never need more...

In all of that, I've never needed to 'restore'. The point is I don't exactly back stuff up; I move it to the new device and put the old one on the shelf. The last time I pulled data from a 'shelved' device due to corruption in the current copy was back in the IDE days.
