
NNTP-Posting-Date: Wed, 04 Jun 2025 18:54:24 +0000
From: Joe Gwinn <joegwinn@comcast.net>
Newsgroups: sci.electronics.design
Subject: Re: "RESET"
Date: Wed, 04 Jun 2025 14:54:30 -0400
Message-ID: <9e314k97pgk1b8bfd1vdqdlo06m1gid8s8@4ax.com>
References: <100thgs$v8cm$1@dont-email.me> <m9f71lF5gr0U1@mid.individual.net> <jgr63k1c2ddf09mtihlgv7u9u90v6ihfcp@4ax.com> <10159t3$2q2ds$1@dont-email.me> <p3hc3kl6bna8r8renhrdlj2psmrp8h0f7j@4ax.com> <10170ak$38945$1@dont-email.me> <mkbe3kpu0p4dfts5i0ombnlu7ikm6csdnu@4ax.com> <101ckan$i2b3$3@dont-email.me> <elqj3k17artqe5b9inne48ork5gurdp1u7@4ax.com> <101p8sd$phe5$1@dont-email.me> <u6l04kh2kfmgl5k0ekggbcpkia3ltm1qp0@4ax.com> <101pq5f$t350$2@dont-email.me>
User-Agent: ForteAgent/8.00.32.1272
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Lines: 315

On Wed, 4 Jun 2025 17:53:19 +0200, David Brown
<david.brown@hesbynett.no> wrote:

>On 04/06/2025 16:55, Joe Gwinn wrote:
>> On Wed, 4 Jun 2025 12:58:21 +0200, David Brown
>> <david.brown@hesbynett.no> wrote:
>> 
>>> On 30/05/2025 19:39, Joe Gwinn wrote:
>>>> On Fri, 30 May 2025 17:53:59 +0200, David Brown
>>>> <david.brown@hesbynett.no> wrote:
>>>>
>>>>> On 28/05/2025 18:07, Joe Gwinn wrote:
>>>
>>>>>> I recall those days.  Some managers thought that software quality
>>>>>> could be assured by decreeing that no module could have a complexity
>>>>>> (computed in various ways) exceeding some arbitrary limit.  The
>>>>>> problem was that real-world problems are vastly more complex than
>>>>>> such limits allow, so the inherent complexity got atomized into a
>>>>>> bazillion tiny modules, hiding the structure and imposing large
>>>>>> added processing overheads from traversing all those inter-module
>>>>>> interfaces.
>>>>>>
>>>>>
>>>>> The problem with any generalisation or rule is that it is sometimes
>>>>> inappropriate.  /Most/ functions, modules, pages of schematic diagram,
>>>>> or whatever, should have a low complexity however you compute it.  But
>>>>> there are always some that are exceptions, where the code is clearer
>>>>> despite being "complex" according to the metrics you use.
>>>>
>>>> No, all of the complexity metrics were blown away by practical
>>>> software running on practical hardware.  Very few modules were that
>>>> simple, because too many, too-small modules carry large inter-module
>>>> interface overheads.
>>>>
>>>
>>> That changes nothing of the principles.
>>>
>>> You aim for low and controlled complexity, at all levels, so that you
>>> can realistically test, verify, and check the code and systems at the
>>> different levels.  (Checking can be automatic, manual, human code
>>> reviews, code coverage tools, etc. - usually in combination.)  Any part
>>> with particularly high complexity is going to take more specialised
>>> testing and checking - that costs more time and money, and is higher
>>> risk.  Sometimes it is still the right choice, because alternatives are
>>> worse (such as the "too many small modules" issues you mention) or
>>> because there are clear and reliable ways to test due to particular
>>> patterns (as you might get in a very large "dispatch" function).
>> 
>> In theory, sure.  In practice, it didn't help enough to make it
>> worthwhile.
>> 
>
>OK.
>
>> 
>>> You don't just throw your hands in the air and say it's better with
>>> spaghetti in a module than spaghetti between modules, and therefore you
>>> can ignore complexity!  I don't believe that is what you are actually
>>> doing, but it sounds a little like that.
>> 
>> Peer review of the code works better, because no pattern scanning tool
>> can tell spaghetti from inherent complexity.
>> 
>
>That's certainly true in some cases.  It surprises me a little that your 
>experience was so much like that, but of course experiences differ.  My 
>experience (and I freely admit I haven't used complexity analysis tools 
>much) is that most functions can be relatively low complexity - the 
>inherently high complexity stuff is only a small proportion of the code. 
>  In one situation where this was not the case, I asked the programmer 
>to re-structure the whole thing - the code was badly designed from the 
>start and had become an incomprehensible mess.  Peer review did not 
>help, because the peer (me) couldn't figure out what was going on in the 
>code.

All true, but at the end of the day, complexity metrics and coverage
tools never came close to paying for themselves, and so they
gradually faded.


>However, it is entirely true that some code will be marked as very high 
>complexity by tools and yet easily and simply understood by human 
>reviewers.  If that is happening a lot in a code base, automatic tools 
>(at least the ones you are trying) are not going to be much use.

The issue is not human understanding per se; it's human effort.
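
(A concrete sketch of the kind of code at issue, invented here for
illustration: a flat opcode dispatcher.  A cyclomatic-complexity
counter charges one decision path per case and flags it as "complex,"
yet a reviewer can check every case at a glance.)

    /* Hypothetical opcode dispatcher -- high metric complexity,
       trivial human review.  Names are made up for the example. */
    #include <stdio.h>

    enum op { OP_NOP, OP_ADD, OP_SUB, OP_MUL, OP_NEG, OP_HALT };

    /* Returns the new accumulator value after applying one opcode. */
    static int dispatch(enum op code, int acc, int arg)
    {
        switch (code) {
        case OP_NOP:  return acc;
        case OP_ADD:  return acc + arg;
        case OP_SUB:  return acc - arg;
        case OP_MUL:  return acc * arg;
        case OP_NEG:  return -acc;
        case OP_HALT: return acc;
        default:      return acc;   /* unknown opcode: leave state alone */
        }
    }

    int main(void)
    {
        int acc = 0;
        acc = dispatch(OP_ADD, acc, 5);
        acc = dispatch(OP_MUL, acc, 3);
        printf("acc = %d\n", acc);  /* prints: acc = 15 */
        return 0;
    }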


>> And this goes double for operating system kernel code, which violates
>> essentially all of the coding standards developed for user-level
>> application code.
>> 
>
>Different code has different needs and standards, yes.
>
>> 
>>>>>>> Other tools that can be useful in testing are code coverage tools - you
>>>>>>> can check that your test setups check all paths through the code.
>>>>>>
>>>>>> We still do this, but the limitation is that all such tools yield far
>>>>>> more false alarms than valid hits, so all hits must be manually
>>>>>> verified.
>>>>>>
>>>>>
>>>>> A false alarm for a code coverage report would mean code that is not
>>>>> reported as hit, but actually /is/ hit when the code is run.  How does
>>>>> that come about?
>>>>
>>>> The code coverage vendors hold the details close, so we usually don't
>>>> know how hits are declared, and probably never will.
>>>>
>>>
>>> Do the gcc and gcov developers hold their details secret?  I'm sure
>>> there are many good reasons for picking different code coverage tools,
>>> and I'm not suggesting that gcov is in any way the "best" (for many
>>> reasons, code coverage tools would be of very limited use for most of my
>>> work).  And there are all sorts of different coverage metrics.  But it
>>> would surprise me if major vendors keep information about the prime
>>> purpose of the tool a secret.  Who would buy a coverage tool that
>>> doesn't tell you what it measures?
>> 
>> I was dealing with a proprietary code coverage tool that management
>> was quite enamored with and so was pressuring us to use.  But we had
>> only a sales brochure to go on, and I point-blank refused to use it
>> without knowing what it did and how.  This caused a copy of the
>> requirements document of the scanner to appear.
>> 
>
>No software tool can fix management problems :-(
>
>> I don't think gcov existed then.  We used gcc, so the software folk
>> would have used it were it both available and mature enough.
>> 
>
>Fair enough.  I haven't done anything significant with gcov, so I can't 
>say how good it might be.  (It is very difficult to use tools that write 
>data to files when you are working on small microcontrollers with no 
>filesystem and at most a small RTOS.)

In those cases, the development computers were far larger than the
target systems.
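
(For reference, the usual host-side gcov workflow is roughly the
following.  This is a minimal sketch assuming gcc on a host with a
filesystem; the file and function names are invented.)

    /* demo.c -- build:  gcc --coverage -O0 -o demo demo.c
       run:              ./demo         (writes demo.gcda counter file)
       report:           gcov demo.c    (writes annotated demo.c.gcov) */
    #include <stdio.h>

    static int classify(int x)          /* two branches -> several arcs */
    {
        if (x < 0)  return -1;
        if (x == 0) return 0;
        return 1;
    }

    int main(void)
    {
        printf("%d\n", classify(42));   /* only the x > 0 path is hit,
                                           so the other arcs show zero
                                           counts in demo.c.gcov */
        return 0;
    }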


>>>> Maybe modern AI will do better, but it may be too expensive to make
>>>> business sense.
>>>>
>>>
>>> We can pretty much guarantee that commercial vendors will add claims of
>>> AI to their tools and charge more for them.  Whether or not they will be
>>> better for it, is another matter.
>> 
>> Yes.  Don't forget Quantum.
>
>We are already into post-quantum algorithms, at least in some fields!
>
>> 
>> 
>>> I would expect AI to be more useful in the context of static error
>>> checkers, simulators, and fuzz testers rather than code coverage at
>>> run-time.

Forgotten from earlier: "Just to be clear - are you using
non-intrusive statistical code coverage tools (i.e., a background
thread, timer, etc., that samples the program counter of running code)?
Or are you using a tool that does instrumentation when compiling?  I'm
trying to get an understanding of the kinds of "false hits" you are
seeing."

The focus here is on non-intrusive code evaluation tools.

We also use intrusive tools and instrumentation in the integration
lab.
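
(For concreteness, the non-intrusive flavor can be as simple as a
profiling timer whose handler records the interrupted program counter.
The sketch below is mine, assuming Linux/x86-64 with glibc; a real
tool would use a wrap-around buffer and map the sampled addresses
back to functions via the symbol table.)

    /* Statistical PC sampling via SIGPROF -- illustrative only.
       Build:  gcc -O2 -o pcsample pcsample.c   (Linux, x86-64, glibc) */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <ucontext.h>

    #define MAX_SAMPLES 100000
    static uintptr_t samples[MAX_SAMPLES];   /* raw sampled PC values */
    static volatile size_t nsamples;

    static void on_prof(int sig, siginfo_t *si, void *ucv)
    {
        (void)sig; (void)si;
        ucontext_t *uc = ucv;
        if (nsamples < MAX_SAMPLES)          /* record where the timer */
            samples[nsamples++] =            /* interrupted the code   */
                (uintptr_t)uc->uc_mcontext.gregs[REG_RIP];
    }

    static void start_sampler(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_prof;
        sa.sa_flags = SA_SIGINFO | SA_RESTART;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGPROF, &sa, NULL);

        struct itimerval it;
        it.it_interval.tv_sec = 0;
        it.it_interval.tv_usec = 1000;       /* ~1 kHz of CPU time */
        it.it_value = it.it_interval;
        setitimer(ITIMER_PROF, &it, NULL);   /* counts CPU time only */
    }

    int main(void)
    {
        start_sampler();
        volatile double x = 0.0;
        for (long i = 0; i < 50000000; i++)  /* work to be sampled */
            x += i * 0.5;
        printf("collected %zu PC samples\n", (size_t)nsamples);
        return 0;
    }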


>> Why (AI)?  I would think that an LLM could follow the thread far better than
>> any static checker.
>> 
>
>I mean that I think there is more potential for adding useful AI 
========== REMAINDER OF ARTICLE TRUNCATED ==========