Processes running on your computer can 'interrupt' each other and, if your process has a higher priority than the one currently hogging processor time, it will be given some running time. This idea is to base the interrupt priority of the UI process (which allows you to move windows, press buttons, etc.) on the dynamics of the mouse movement. So, if the user is, for example, watching a never-ending hourglass cursor on the screen and moving the mouse round and round in tiny circles whilst shouting "Wake up you bastard machine!", the UI will be given a high priority and the user will be permitted to press the 'Cancel' button.
I've seen a large number of computer users do this kind of rapid mouse movement in the expectation that the computer will react to it, so it seems sensible to make computers that will react to it.
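A minimal sketch of how "frantic" might be detected, assuming raw mouse deltas are available from somewhere and the UI process's pid is known; looks_frantic() and its thresholds are invented for illustration:

```c
#include <math.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/types.h>

#define WINDOW 32  /* number of recent mouse deltas to look at */

struct delta { double dx, dy; };

/* "Frantic" = lots of distance travelled but little net displacement,
 * i.e. the tiny circles described above.  Thresholds are made up. */
static int looks_frantic(const struct delta d[WINDOW])
{
    double travelled = 0.0, net_x = 0.0, net_y = 0.0;
    for (int i = 0; i < WINDOW; i++) {
        travelled += hypot(d[i].dx, d[i].dy);
        net_x += d[i].dx;
        net_y += d[i].dy;
    }
    return travelled > 400.0 && hypot(net_x, net_y) < 0.1 * travelled;
}

/* Bump the UI process's scheduling priority (needs privilege). */
static void boost_ui(pid_t ui_pid)
{
    setpriority(PRIO_PROCESS, ui_pid, -10);
}
```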
Event-based programming
http://en.wikipedia...t-based_programming [goldbb, Jun 07 2010]
The type of asynchronous IO that I'm thinking of.
http://en.wikipedia...tion_queues.2Fports [goldbb, Jun 09 2010]
|
|
I've seen people do that, too, but unless that is a different window system architecture from those I'm used to, pressing a button involves the application that owns that button. If that application is blocked waiting for something, no amount of prioritizing it will help. |
|
|
I don't think this is a priority problem, but one having to do with the difficulty of implementing complicated operations in a way that lets them be interrupted, and with the difficulty of predicting whether seemingly innocuous steps - for example, looking up a domain name - will take a long time or not. |
|
|
Hmm - you're right. It would be nice, though, if there were a way to get the behaviour the user expects: i.e. that moving the mouse frantically will cause the computer to pay attention and be more responsive. |
|
|
<sarcasm> Because by default, we'd like to be ignored and sluggishly responded to? </sarcasm> |
|
|
I don't know. Maybe in connection with power management. It could also emit the faint rumble of distant engines starting up.
But mostly, this is an idea for a prettier magic wand. |
|
|
[hippo]'s right. The psychology of the user is the given; any failure to accommodate it is a flaw, and no amount of "it's too difficult to implement, so you don't really want it" will change that. It's a bit more reasonable to say "It's a failure of the application programmer, not the OS," but I still feel the OS has failed if it doesn't handle badly written programs gracefully. |
|
|
//without checking for cancel events//
How about:
1) all threads have a timeout interval
2) at its expiry, an ALRM signal (or something) is sent
3) threads that don't respond appropriately have their priority reduced, gradually, over time. |
|
|
4) Some threads, of course, are supposed to run forever, but that would have to be specified explicitly. Even a programmer too lazy to write "abort gracefully on cancel" code might just specify a reasonable number for the timeout interval when supplying arguments to create_thread() (or whatever), rather than just "infinity" every time. (A rough sketch of this scheme follows.) |
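A rough sketch of that scheme, under stated assumptions: in place of the ALRM signal, a watchdog thread checks a heartbeat that workers update whenever they service events (a simpler stand-in to sketch safely); demoting an individual thread via setpriority() on its kernel tid is Linux-specific, and struct watched, heartbeat(), and the one-second cadence are invented for illustration.

```c
#include <stdatomic.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

struct watched {
    pid_t tid;              /* kernel thread id, from gettid() */
    atomic_long last_beat;  /* last heartbeat, in seconds */
    long timeout;           /* allowed silence before demotion, seconds */
    int nice_now;           /* nice value we have currently imposed */
};

/* Workers call this whenever they service a cancel/UI event. */
static void heartbeat(struct watched *w)
{
    atomic_store(&w->last_beat, (long)time(NULL));
}

/* Watchdog loop: gradually demote a thread that has gone quiet. */
static void *watchdog(void *arg)
{
    struct watched *w = arg;
    for (;;) {
        sleep(1);
        long silence = (long)time(NULL) - atomic_load(&w->last_beat);
        if (silence > w->timeout && w->nice_now < 19) {
            w->nice_now++;  /* step 3: reduce priority gradually */
            setpriority(PRIO_PROCESS, w->tid, w->nice_now);
        }
    }
    return NULL;
}
```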
|
|
// [hippo]'s right. The psychology of the user is the given; any failure to accommodate it is a flaw, and no amount of "it's too difficult to implement, so you don't really want it" will change that. |
|
|
I don't think anybody here argued that "it's too difficult to implement, so you don't really want it". |
|
|
//I don't think anybody here argued that//
Neither do I. I associate you more with the position I described as being more reasonable than that one. |
|
|
I think [mouseposture]'s point about the psychology of the user captures what I was trying to say. If the mental model the user has of the computer is one in which slamming the mouse down on the mousemat and then shaking it from side to side should elicit a response from the computer, then engineers should build computers which do respond to such input. |
|
|
//long standing bitch// We prefer to address her as "The Duchess of Cornwall". (Whoops, there goes my knighthood) |
|
|
//then engineers should build computers which do respond to such input.// What happens when that part freezes up too? |
|
|
Then it's time for the 3 lb. mallet, in order to strike any key properly. |
|
|
One possible solution would be to replace *all* system calls which could potentially block with a nonblocking request-type system call; there would be one and only one type of blocking system call -- "wait for next completed event(s)." |
|
|
The downside of doing this would be that, in practice, each regular (blocking) system call would need to be replaced by a (nonblocking) request call, followed by a dispatch loop (in which *at least* one "wait for event" call will be made), and thus more overhead. |
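POSIX AIO already approximates this model, so a minimal sketch can be written against it: aio_read() plays the part of the nonblocking request call, and aio_suspend() is the single "wait for completed event(s)" call. (Compile with -lrt on older glibc; error handling trimmed for brevity.)

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    int fd = open("/etc/hostname", O_RDONLY);

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof buf;

    aio_read(&cb);                    /* the nonblocking request call */

    const struct aiocb *pending[] = { &cb };
    while (aio_error(&cb) == EINPROGRESS) {
        /* the dispatch loop: the one and only place we block */
        aio_suspend(pending, 1, NULL);
        /* ...a real program would also service UI events here... */
    }

    ssize_t n = aio_return(&cb);      /* collect the completed result */
    close(fd);
    return n < 0;
}
```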
|
|
Delays can still occur, though, if a program needs to do a lengthy calculation that doesn't involve any system calls... that program won't service mouse clicks, etc., until the calculation is done, or unless the author included an explicit "deal with events" call. |
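A sketch of what that explicit call looks like in practice; poll_events() and cancelled() are hypothetical stand-ins for whatever event API the program actually uses:

```c
#define CHECK_EVERY 100000L  /* iterations between event checks */

extern void poll_events(void);  /* hypothetical: drain pending UI events */
extern int  cancelled(void);    /* hypothetical: did the user hit Cancel? */

double long_calculation(long n)
{
    double acc = 0.0;
    for (long i = 0; i < n; i++) {
        acc += (double)i / (i + 1.0);   /* stand-in for real work */
        if (i % CHECK_EVERY == 0) {
            poll_events();              /* the explicit "deal with events" */
            if (cancelled())
                break;                  /* honour the Cancel button */
        }
    }
    return acc;
}
```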
|
|
When mice first became popular, they were connected to the video card. Presumably a video card which had all the information on the windowing parameters could very easily toss an interrupt when a pertinent line was crossed (literally), which would mean that only relevant UI input gets to the CPU... which would mean people wouldn't need to buy the next-gen processors for another year or so. |
|
|
There are some almost-implementations of this, and they all use a "sandbox" approach. The best I've seen so far is in Chrome: each tab spawns a separate, sandboxed, independent process, so if it starts going aglay, you can just kill it and move to a different process. Less "Cancel", more "Kill", but it's getting there. Since each process runs in its own little predefined splodge, each of which has already been safely segregated off in terms of resources etc. by the OS, if something goes wrong, the higher-level GUI process owned by the OS shouldn't be affected. |
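A rough sketch of that per-tab process model, assuming a plain fork() stands in for Chrome's real sandboxing; run_tab() is a hypothetical placeholder for the tab's actual work:

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

extern void run_tab(void);   /* hypothetical: the tab's own event loop */

/* Each "tab" becomes a forked child process. */
pid_t spawn_tab(void)
{
    pid_t pid = fork();
    if (pid == 0) {          /* child: run the tab, never return */
        run_tab();
        _exit(0);
    }
    return pid;              /* parent: the GUI keeps only a handle */
}

/* If a tab wedges, the GUI process is unaffected: just kill it. */
void kill_tab(pid_t pid)
{
    kill(pid, SIGKILL);      /* less "Cancel", more "Kill" */
    waitpid(pid, NULL, 0);   /* reap the child to avoid a zombie */
}
```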
|
|
That's fine for trivial little apps that are only ever likely to use 1% of the computer's available resources, but for larger applications, this may not always be possible. Then again, larger applications should be industrial in nature and not let users fiddle about via dodgy GUI layers - except perhaps at the very fringes where it doesn't matter - but that takes you back to using a respectful 1% of the available machine. |
|
|
To pick up on [bigsleep]'s //Usually called 'asychronous'. Ten times harder to code for.// Yes, it is harder, but that is how we should be writing things these days - even a self-contained, fat-client, all-in-one application is best written towards a client-server MVC pattern/architecture - whether for future-proofing, or whatever - it's just good practice. |
|
|
The problem arises when you are looking at software that interfaces directly with hardware - disk utilities and that kind of thing. |
|
|
// so it seems sensible // |
|
|
Yes, it does, but it's already too late ... the one that calls itself "Bill Gates" is already loose on your planet. |
|
|
[bigsleep], you are correct - asynchronous is the term I wanted :) As for it being harder to code for, that would surely depend on what you were used to. |
|