From: Ben Collver <bencollver@tilde.pink>
Newsgroups: comp.misc
Subject: Simplicity And Computing
Date: Tue, 23 Apr 2024 00:39:27 -0000 (UTC)
Message-ID: <slrnv2e0cf.m05.bencollver@svadhyaya.localdomain>
User-Agent: slrn/1.0.3 (Linux)

SIMPLICITY AND COMPUTING
========================
by Curt Sampson, 1999

This is an e-mail response I wrote to a mailing list recently, when a discussion of how computers will have to change in the future came up. It was intended to be a short response with a couple of my thoughts, but somehow turned into this article.

On Wed, 3 Feb 1999, Blaine Cook wrote:

> One of the points that Dr. Raduchel stressed was that computers
> must become as intuitive as telephones or televisions for them to
> become commodity devices, and truly become an everyday part of
> people's lives....

I've thought about this for a bit, and I've come to the conclusion that this has already happened. I came to this conclusion while talking to a non-computer person who had seen the first couple of episodes of a series on the history of microcomputers on Knowledge Network (the one by Robert Cringely, of PBS fame). She was rather shocked at the rate of change in the microcomputer industry compared to other industries, and that's when I realised that most people haven't really seen the revolution that PLCs and embedded microprocessors have brought about in the consumer world.

Cars, televisions and VCRs these days have some fairly sophisticated software inside them, but people just don't notice this. Why is this? Perhaps it's because they're `merely' doing a better job of solving an existing problem that is, from the user-interface perspective, fairly simple. After all, from the average driver's point of view, braking and accelerating a car are not a big problem; you push on one pedal or the other, and the car slows down or speeds up. The calculation of exactly how much force one should apply to each wheel in order to maintain maximum traction is not a simple problem, but it's not the user's problem. Of course, this also takes a lot of control out of the hands of the user; the problem is not going to be solved in as optimal a way as it would be if an expert user could modify what the computer was doing based on current conditions.

On the other hand, many people are not solving simple problems with computers. Sure, typing an essay seems like a simple problem at first, and it is if you do it on a typewriter. But are we really dealing with the same problem when we type it on a computer? Or do we now insist on the kind of typographical sophistication that was once available only to professional typesetters, page layout and graphics sophistication once used only by professional graphic artists and designers, spelling and grammar checking once available only from professional editors, and documentation markup sophistication once solely the realm of SGML professionals?
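To make the braking example above concrete, here is a minimal sketch of the kind of per-wheel calculation that is `not the user's problem.' It is an illustration only, not code from the essay: the slip threshold, back-off factor, and all names are invented, and a real anti-lock system is far more involved.

    # Minimal sketch of an anti-lock braking loop: the driver supplies one
    # number (pedal force); the per-wheel decision happens invisibly.
    # All thresholds here are invented for illustration.

    def wheel_slip(vehicle_speed, wheel_speed):
        """Fraction by which a wheel lags the vehicle (0 = rolling freely)."""
        if vehicle_speed <= 0:
            return 0.0
        return (vehicle_speed - wheel_speed) / vehicle_speed

    def brake_force(pedal_force, vehicle_speed, wheel_speed,
                    max_slip=0.2, backoff=0.5):
        """Pass the driver's request through, easing off if the wheel locks."""
        if wheel_slip(vehicle_speed, wheel_speed) > max_slip:
            return pedal_force * backoff  # wheel is locking: reduce force
        return pedal_force                # normal braking

    print(brake_force(100.0, vehicle_speed=20.0, wheel_speed=10.0))  # 50.0

The user's interface stays one pedal; the complexity lives, unseen, in the loop.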
A problem I've noticed many times before came to light yet again when I was preparing the overheads for a presentation I gave recently. For various reasons (including my complete inability to figure out how to get a Windows 95 system to print a raw PostScript file), I decided to do this in Microsoft Word. This turned out to be quite a bad experience, and was survivable only because I was once an expert on MS Word 5.0 for DOS (to the tune of some ten thousand pages of contracts with very specific formatting requirements). After some mucking about I finally did get my material into Word and get it formatted, but two things really annoyed me:

* I wanted to start with my content, but had to pay attention to
  formatting from the very beginning.

* MS Word's style sheet system, which is actually quite nice, somehow
  got buried beneath layers of other stuff between the DOS and Windows
  versions of Word.

I think these two points demonstrate the two branches of the complexity problem we're dealing with. I recently saw a lecture at Simon Fraser University by MIT professor Nancy Leveson. She reminded me that Fred Brooks (famed author of _The Mythical Man-Month_), in his essay _No Silver Bullet_, identifies two types of complexity we deal with in the software industry: essential and accidental. Essential complexity is that which is part of the problem itself. You can't get rid of it and still solve the problem; it must be dealt with. Accidental complexity is additional complexity that has been introduced into the problem, usually as part of the process of solving it. (In computing, this would be having to write something in assembler rather than a high-level language, for example.) Accidental complexity can be reduced without affecting the solution of the problem itself.

Fred Brooks applies this to software development, but I think it applies equally to my experience with MS Word. In point one above, I had my mind full of the content of my overhead slides, not their presentation. I knew that a particular sentence I was typing would be a header, or a bullet point, or a code example, but I didn't care what it looked like at the time, so long as I could read it. Unfortunately, it wasn't as easy as it could have been to tag things as different types of text, the formatting of which I would deal with later. Because of the graphical environment I often had to care very much what it looked like, because otherwise it wasn't readable on the screen. This is where I found the old text-based word processing much better; if something was in three-point compressed italic type, I didn't have to worry about it right then, because I didn't see it. I could deal with the formatting later, when I didn't have content to worry about. Having to deal with formatting right off introduced accidental complexity into my writing process, thus interfering with it.

In the second case, we have a problem I've seen for a long time, but foresee no resolution for: hiding the essential complexity of a task in order to make it `simpler.' Current versions of MS Word provide far too many mechanisms that go mucking about with your document, and encourage you to do things without understanding them. Some of these (such as that damn paperclip) can be removed by a relative expert if he knows what he's doing. Others are features built in that simply have to be dealt with. A lot of it is embedded in the `attitude' of the program itself, such as the fact that it's much easier to edit and apply formatting to individual elements than it is to styles.
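The style-sheet idea behind both complaints can be shown in a minimal sketch. The toy document model and style table below are invented for illustration, not drawn from Word or the essay: content is tagged with what it *is*, and appearance lives in one table.

    # Appearance lives in one place: the style table.
    styles = {
        "heading": {"font": "Helvetica", "size": 18},
        "body":    {"font": "Times",     "size": 11},
        "code":    {"font": "Courier",   "size": 10},
    }

    # The document records what each piece of text is, not how it looks.
    document = [
        ("heading", "Simplicity and Computing"),
        ("body", "Essential complexity is part of the problem itself."),
        ("code", "print('hello')"),
    ]

    def render(doc, styles):
        for tag, text in doc:
            s = styles[tag]
            print("[%s %dpt] %s" % (s["font"], s["size"], text))

    render(document, styles)

Changing styles["body"] here reformats every body paragraph at once; applying formatting to each element by hand, as Word makes easier, is exactly the accidental complexity described above.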
For me this was just an annoyance: distractions I had to ignore or push aside in order to deal with my formatting issues. For others, those who do not understand how word processing is different from typing, the core of how the program works is hidden from them, rather than being exposed. Anybody who's worked in a typing pool for any length of time has seen some poor WP operator inserting hard page breaks to make sure that paragraphs are not broken across pages, rather than marking the paragraph style as non-breakable. And usually he or she has no idea why this is the wrong solution to the problem.

This is, in my opinion, the great failure of word processing: huge amounts of effort have been expended to make using a word processor look like using a typewriter, a piece of paper, or whatever, when it's just not the same thing. It hides the essential complexity of the task, the things you need to know to deal well with words in computer memory, and ends up confusing people more when they see what is, for them, non-intuitive behaviour.

And this failure extends to almost every area of microcomputer use, as far as I can tell. I spent an hour the other day sorting out a web-page designer who didn't understand why changes did or didn't appear on her `web page.' She didn't know that she was actually working with four separate copies of it (one on her local hard disk, one on the server's hard disk, and one from each of those sources in RAM). Once I explained to her how her data were being moved about and copied, she was able to control this bit of her universe. But until then, the complexity of computers that most software designers try so desperately to hide was making her life unhappy.

So no, I don't think that making current PCs and their applications `simpler' or `more intuitive' is going to get anywhere. People who don't understand data movement and copying simply aren't going to be able to find their data. Word processing is really useful only when you don't use it like a typewriter. And you can't work a spreadsheet on the front panel of a microwave oven.

What we need to do is quit piling layers of accidental complexity over the essential complexity of our computer-based applications in a hopeless attempt to make things less complex. We need to expose the complexity that needs to be there and make it as accessible as possible, so that people can deal with it rather than avoid it.

From: <https://web.archive.org/web/19991001100548/http://www.cynic.net/~cjs/computer/writings/simplicity.html>