From: fir <fir@grunge.pl>
Newsgroups: comp.lang.c
Subject: Re: program to remove duplicates
Date: Sun, 22 Sep 2024 14:46:12 +0200
Organization: i2pn2 (i2pn.org)
Message-ID: <66F01194.5030706@grunge.pl>
References: <ecb505e80df00f96c99d813c534177115f3d2b15@i2pn2.org> <vcnfbi$1ocq6$1@dont-email.me> <8630bec343aec589a6cdc42bb19dae28120ceabf@i2pn2.org> <vcnu3p$1vkui$2@dont-email.me> <66EF8293.30803@grunge.pl> <vcoh04$24ioi$1@dont-email.me> <66EFF046.8010709@grunge.pl> <vcos2o$264lk$1@dont-email.me>
To: Bart <bc@freeuk.com>
In-Reply-To: <vcos2o$264lk$1@dont-email.me>

Bart wrote:
> On 22/09/2024 11:24, fir wrote:
>> Paul wrote:
>
>>> The normal way to do this is to do a hash check on the
>>> files and compare the hashes. You can use MD5SUM, SHA1SUM, or SHA256SUM
>>> as a means to compare two files. If you want to be picky about
>>> it, stick with SHA256SUM.
>
>> the code i posted works ok, and if someone has windows and mingw/tdm
>> they may compile it and check the application if they want
>>
>> hashing is not necessary imo, though it probably could speed things
>> up - im not strongly convinced that the probability of a mistake in
>> this hashing is strictly zero (as i have never used it and would
>> probably need to produce my own hashing)..
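Paul's hash-then-compare idea can be sketched in C. Since the thread does not settle on a hashing library, this sketch uses FNV-1a, a simple non-cryptographic hash, purely as a stand-in; a real tool would want SHA-256 from a library such as OpenSSL. The names `fnv1a` and `hash_file` are illustrative, not from anyone's posted code.

```c
#include <stdint.h>
#include <stdio.h>

/* FNV-1a over a buffer -- a stand-in for SHA-256 in this sketch.
   Files with different hashes are definitely different; equal hashes
   still call for a byte-by-byte check, since FNV-1a is not
   collision-resistant. */
uint64_t fnv1a(const unsigned char *buf, size_t n)
{
    uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
    for (size_t i = 0; i < n; i++) {
        h ^= buf[i];
        h *= 1099511628211ULL;              /* FNV prime */
    }
    return h;
}

/* Hash a whole file in chunks; returns 0 if the file cannot be opened. */
uint64_t hash_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    uint64_t h = 14695981039346656037ULL;
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        for (size_t i = 0; i < n; i++) {
            h ^= buf[i];
            h *= 1099511628211ULL;
        }
    fclose(f);
    return h;
}
```

The point of the hash is cheap elimination: two files whose hashes differ cannot be duplicates, so only hash-equal pairs need the expensive full comparison.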
>> probably it is mathematically proven
>> that it is almost zero, but for now at least it is more interesting
>> to me whether the code i posted is ok
>
> I was going to post similar ideas (doing a linear pass working out
> checksums for each file, sorting the list by checksum and size, then
> candidates for a byte-by-byte comparison, if you want to do that, will
> be grouped together).
>
> But if you're going to reject everyone's suggestions in favour of your
> own already working solution, then I wonder why you bothered posting.
>
> (I didn't post after all because I knew it would be futile.)
>
i want to discuss, not to do everything that is mentioned.. is that hard
to understand? so i may read about the options, but i literally have no
time to implement even the good ideas - this program i wrote has shown
itself to work and im now using it
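Bart's pipeline - one linear pass collecting size and checksum per file, a sort so potential duplicates land next to each other, then byte-by-byte checks only for adjacent matches - could look roughly like this in C. The struct and function names here are illustrative, not taken from anyone's posted program.

```c
#include <stdint.h>
#include <stdlib.h>

/* One entry per file: size and checksum gathered in a single linear pass. */
struct entry {
    const char *path;
    uint64_t size;
    uint64_t sum;
};

/* Order by size, then checksum, so potential duplicates become adjacent. */
int cmp_entry(const void *a, const void *b)
{
    const struct entry *x = a, *y = b;
    if (x->size != y->size) return x->size < y->size ? -1 : 1;
    if (x->sum  != y->sum)  return x->sum  < y->sum  ? -1 : 1;
    return 0;
}

/* After sorting, only adjacent entries with equal (size, sum) are
   candidates; `check` would do the byte-by-byte comparison. */
void find_candidates(struct entry *e, size_t n,
                     void (*check)(const char *, const char *))
{
    qsort(e, n, sizeof *e, cmp_entry);
    for (size_t i = 1; i < n; i++)
        if (e[i].size == e[i - 1].size && e[i].sum == e[i - 1].sum)
            check(e[i - 1].path, e[i].path);
}
```

Sorting turns an all-pairs comparison into a neighbours-only scan, which is where the speedup Bart mentions comes from.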