Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: fir <fir@grunge.pl>
Newsgroups: comp.lang.c
Subject: Re: program to remove duplicates
Date: Sun, 22 Sep 2024 16:22:00 +0200
Organization: i2pn2 (i2pn.org)
Message-ID: <66F02808.8030404@grunge.pl>
References: <ecb505e80df00f96c99d813c534177115f3d2b15@i2pn2.org> <vcnfbi$1ocq6$1@dont-email.me> <8630bec343aec589a6cdc42bb19dae28120ceabf@i2pn2.org> <vcnu3p$1vkui$2@dont-email.me> <66EF8293.30803@grunge.pl> <vcoh04$24ioi$1@dont-email.me> <66EFF046.8010709@grunge.pl> <vcos2o$264lk$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Info: i2pn2.org;
	logging-data="3010050"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="+ydHcGjgSeBt3Wz3WTfKefUptpAWaXduqfw5xdfsuS0";
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:27.0) Gecko/20100101 Firefox/27.0 SeaMonkey/2.24
To: Bart <bc@freeuk.com>
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <vcos2o$264lk$1@dont-email.me>
Bytes: 3519
Lines: 55

Bart wrote:
> On 22/09/2024 11:24, fir wrote:
>> Paul wrote:
>
>>> The normal way to do this, is do a hash check on the
>>> files and compare the hash. You can use MD5SUM, SHA1SUM, SHA256SUM,
>>> as a means to compare two files. If you want to be picky about
>>> it, stick with SHA256SUM.
>
>
>> the code I posted works ok, and if someone has Windows and mingw/tdm
>> they may compile it and check the application if they want
>>
>> hashing is not necessary imo, though it probably could speed things up -
>> I'm not strongly convinced that the probability of a mistake in this
>> hashing is strictly zero (as I have never used it and would probably
>> need to produce my own hashing).. probably it's mathematically proven
>> to be almost zero, but for now at least it is more interesting to me
>> whether the code I posted is ok
>
> I was going to post similar ideas (doing a linear pass working out
> checksums for each file, sorting the list by checksum and size, then
> candidates for a byte-by-byte comparison, if you want to do that, will
> be grouped together).
>
> But if you're going to reject everyone's suggestions in favour of your
> own already working solution, then I wonder why you bothered posting.
>
> (I didn't post after all because I knew it would be futile.)
>
>

there is yet something to say about the efficiency of this

when I observe how it works - this program is square in the sense that it
has a half-square loop over the directory's file list, so for 20k files it
may be like 20k*20k/2 - 20k comparisons, but it mostly only compares sizes,
so I'm not sure how serious this squareness is.. are 200M int comparisons
a problem? - maybe they become one for larger sets
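
roughly the shape of the loop I mean - this is only a minimal sketch with
hypothetical names (the file names and sizes are assumed to be read into
arrays by an earlier directory scan), not the code I actually posted:

#include <stdio.h>

/* full byte-to-byte comparison of two files; returns 1 if identical */
static int files_equal(const char *a, const char *b)
{
    FILE *fa = fopen(a, "rb"), *fb = fopen(b, "rb");
    int equal = 1;
    if (!fa || !fb) equal = 0;
    else {
        int ca, cb;
        do {
            ca = fgetc(fa);
            cb = fgetc(fb);
            if (ca != cb) { equal = 0; break; }
        } while (ca != EOF);
    }
    if (fa) fclose(fa);
    if (fb) fclose(fb);
    return equal;
}

/* half-square pass: names[] and sizes[] assumed filled beforehand */
static void find_duplicates(char **names, long *sizes, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
        {
            if (sizes[i] != sizes[j])
                continue;          /* cheap size compare, the common case */
            if (files_equal(names[i], names[j]))  /* only equal sizes get here */
                printf("%s duplicates %s\n", names[j], names[i]);
        }
}

for 20k files the size check in the inner loop runs about 20k*20k/2 times,
which is where the ~200M int comparisons come from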

in terms of real binary comparisons it is not fully square, but more like
sets of smaller squares on the diagonal of this large square, if some of
you know what I mean... and that may be a problem, because if among those
20k files 100 have the same size, then it makes about 100x100 full loads
and 100x100 full binary byte-to-byte compares, which is practically the
full amount if there are indeed 100 duplicates (maybe it's less than
100x100, as on the first finding of a duplicate I mark it as a duplicate
and skip it in the loop)
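
the skip I mention looks roughly like this (again only a sketch with a
hypothetical is_dup[] array, zeroed beforehand, and the files_equal()
helper from the sketch above - not the exact code):

/* same names[]/sizes[] as above; is_dup[] starts all zero */
static void find_duplicates_skip(char **names, long *sizes, char *is_dup, int n)
{
    for (int i = 0; i < n; i++)
    {
        if (is_dup[i]) continue;        /* already matched an earlier file */
        for (int j = i + 1; j < n; j++)
        {
            if (is_dup[j] || sizes[i] != sizes[j])
                continue;
            if (files_equal(names[i], names[j]))
                is_dup[j] = 1;          /* never loaded or compared again */
        }
    }
}

with that skip a group of 100 identical files costs about 99 full loads and
byte compares instead of roughly 100*100/2, so the worst case above gets
somewhat better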

but in practice it does show that for folders bigger than about 3k files it
slows down, probably disproportionately, so some optimisation would be
handy / is needed for large folders
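
one way that optimisation could look is roughly what Bart suggests: sort an
index by size first, then byte-compare only inside runs of equal size, so
the big 20k*20k/2 size loop turns into one qsort plus small per-group
squares (again only a rough sketch with made-up structure names, not
working code from the thread):

#include <stdio.h>
#include <stdlib.h>

struct entry { const char *name; long size; };  /* filled by the directory scan */

static int by_size(const void *pa, const void *pb)
{
    const struct entry *a = pa, *b = pb;
    return (a->size > b->size) - (a->size < b->size);
}

static int files_equal(const char *a, const char *b);  /* from the sketch above */

static void find_duplicates_sorted(struct entry *e, int n)
{
    qsort(e, n, sizeof *e, by_size);
    for (int i = 0; i < n; )
    {
        int j = i + 1;
        while (j < n && e[j].size == e[i].size)
            j++;                        /* [i, j) is one equal-size group */
        for (int a = i; a < j; a++)
            for (int b = a + 1; b < j; b++)
                if (files_equal(e[a].name, e[b].name))
                    printf("%s duplicates %s\n", e[b].name, e[a].name);
        i = j;
    }
}

the duplicate-marking skip from before can be added inside the group loop
in the same way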

that's from observing how it runs on real files from disk