The idea is to build a framework that lets people program micro-tasks on demand for bidders. A bidder submits a large task to the framework/solver, which decomposes it and farms out the resulting sub-tasks to be solved by individuals. The tasks should decompose cleanly enough that a minimal amount of information is needed to complete each one. It's an auction-based system that optimizes profit for the developers and ensures quality for the bidders (i.e. better price == better programmer). This builds upon many ideas from human computing.
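As a rough sketch of the auction step (Python, with an invented Bid type and an arbitrary scoring weight -- a toy, not a real marketplace), the framework might score each bid so that a lower ask and a stronger track record both help, which is one way "better price == better programmer" could be enforced:

from dataclasses import dataclass

@dataclass
class Bid:
    developer: str
    price: float   # asking price for the micro-task, in dollars
    rating: float  # 0.0-1.0 track record, a proxy for quality

def award(bids, weight=0.5):
    # Normalize price against the highest ask so price and rating are
    # comparable, then score each bid; 'weight' sets the quality/price
    # trade-off. The best-scoring bid wins the micro-task.
    top = max(b.price for b in bids)
    def score(b):
        return weight * b.rating + (1 - weight) * (1 - b.price / top)
    return max(bids, key=score)

bids = [Bid("alice", 40.0, 0.9), Bid("bob", 25.0, 0.3)]
print(award(bids).developer)  # "alice": her rating outweighs the higher ask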
like this?
http://www.vworker....er_RequestParm=true [Voice, Jul 12 2010]
Design by Contract
http://en.wikipedia.../Design_by_Contract [mouseposture, Jul 13 2010]
|
It will not work. The devil is in integrating the details with the "big picture". There are so many assumptions that programmers must make about the big picture when writing bits and pieces of the code. Even if you throw unit testing into the mix, it still won't work, because if the programmers are testing the wrong thing, the system will still fail. On the other hand, if you provide that missing background information, it adds too much overhead, and that defeats the whole efficiency thing.
|
Rent a Coder is a step toward that. I think you can overcome the problem of dependencies and integration detail through the decomposition step. That is, however, the "magic" step which makes it all work; otherwise your observations are correct.
|
This is just the seed of an idea; hopefully someone can come up with a scheme to decompose a design into sub-tasks with fixed, known requirements and then match them efficiently to the people best qualified.
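To make that target concrete, here is a hedged sketch (Python; the SubTask type and the thumbnail example are invented, and the decomposition is still written by hand) of the shape such sub-tasks would have to take: each one carries its complete requirements, so a bidder needs no view of the big picture.

from dataclasses import dataclass

@dataclass(frozen=True)
class SubTask:
    name: str
    inputs: dict   # field name -> type, fixed before bidding opens
    outputs: dict  # field name -> type, the acceptance criterion

# A hand-written decomposition of one large task ("thumbnail a photo")
# into sub-tasks with fixed, known requirements. Producing this list
# automatically is the "magic" step the idea depends on.
thumbnail_job = [
    SubTask("decode", {"blob": "bytes"}, {"pixels": "Image"}),
    SubTask("resize",
            {"pixels": "Image", "width": "int", "height": "int"},
            {"pixels": "Image"}),
    SubTask("encode", {"pixels": "Image"}, {"blob": "bytes"}),
]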
|
The problem with the lego approach:
|
Once you reach a certain complexity, performance suffers tremendously. And then you have to start from scratch.
|
Also, most micro-tasks have _already_ been programmed. There are libraries for every problem imaginable.
|
As has been pointed out, integration is the bigger issue.
|
//There are libraries for every problem imaginable.// And those libraries are scattered all over the place, with very uneven documentation. Finding the library you need and dealing with its API is, effectively, what you'd be paying for in this scheme -- and what's wrong with that? But it's not always right to use a library whenever you can. If you do, you wind up with a lot of dependencies.
|
This scheme ought to work well with the "Design by Contract" approach to software development <link>.
|
With the exponential increase in computational power, and more importantly access to said power, the rules of programming will change. What takes teams years to develop could be done using automation and a non-optimal lego approach for one-off applications. Imagine tasking a supercomputer cluster with making an application for cropping out images of dogs from a real-time video stream; such esoteric one-off applications might be the norm in the future.
|
Kind of, except that even when you increase the computing power the problem is the same ... what happens when the app suddenly runs slower because of a change in its code? ... What do you do when your supercomputer application for cropping out images of dogs is tasked with doing the same for cats? Let's say the dogs thing ran just fine, but because someone didn't think things through on a higher level, the cats part makes the whole thing 10x slower (why? who knows ... we don't care, right?). It will still work, just 10x slower. What then? You'll need to hire someone who can see the big picture and make it run as fast as it did before. But they'll see the auto-generated micro spaghetti code and run away.
|
Or you can purchase 10x more computing power. So your future 1000-node cluster costs $5/month, which was a good deal. Will you want to pay $50/month for those cats?
|
I'm sure someone will. The idea is that in the nominal use case the program performs "adequately"; in exceptional cases, where the program's functionality is expanded or goes beyond the normal use case, it might perform less than adequately, and then it might be cheaper to make a whole new program than to try to "kludge" the existing one. The idea is that programs become one-off commodities which are made on demand. This also segues into the concept of no-maintenance programming, where programs are discarded when they cease to serve their purpose.
|
If a person were a YouTube animator who pastes witty messages onto videos of cats and dogs and has a million subscribers, they would pay $50 for such a service. In the coming information age, content will be king, and the tools to create "unique" content will be in the most demand, I suspect.
|
You missed my point. I didn't say they would not pay $50 ... I said they would not pay the 10x difference. If they were paying $50 they would not pay $500 ... and if they were paying $500 they would not pay $5K, and so on. The 10x jump in hardware requirements can very easily creep in if the system is just patched together with no overall plan for integration.
|
I understand your points, [ixnaum]; my counter was that programs of the future are a one-off commodity, built on demand with a very narrow focus. If they perform inadequately outside of that domain, they have to be "rebuilt" from scratch, since that is probably "cheaper" than compensating with more CPU.
|
Given the speed of progress we're approaching, a program even a year old could be hopelessly antiquated, and it's easier to rebuild it than to try to compensate by throwing more CPU at it.
|
To [bigsleep]: you're correct, that step does require either human guidance or direct design.
|
//... hopelessly antiquated and it's easier to rebuild it ...//
|
However attractive this may be (I almost changed my vote for this because it sounds so good) ... having thought about it more, it's not realistic at all.
|
The reason old apps are not rebuilt is that the app becomes its own specification. For example, it's tough to rebuild DOS from scratch (even though it would take a programmer a fraction of the time to do it given today's technology). The reason for this is that many of DOS's bugs, quirks and oddities are actually features. Every little oddity is necessary in the software ecosystem. By rewriting it, those minuscule details are lost. But is detail X really important? Maybe, maybe not. Finding the answer to those questions is a time-consuming job.
|
Now, I know you will say: but DOS is an OS, so it's not just a regular application. To that I answer that it's very rare for an application (especially a successful and useful one) to live in a vacuum. Applications are part of an ecosystem; that's why there are so many nasty, moldy legacy apps still hanging around.
|
That's awfully generous of you.