People are using their CPUs/GPUs (and whatever else) for 'mining' bitcoins, i.e. crunching data. People are also running background programs like SETI@Home (and others) to crunch data.

I propose we combine the processes, so that bitcoin miners also crunch a little of the stuff that contributes to science research (space, cancer, stuff about hadrons), and the people who are already crunching the science data on their screensavers get a little bitcoinage as well.

I predict that in doing so, humanity will lead itself down the path to something resembling the Star Trek utopian society, or whatever. (In Star Trek they say they don't use 'money', but I reckon they have bitcoins really.)
Annotations:
PrimeCoin already tries to do this. It's a great idea if it can be implemented. The difficulty lies in preventing malicious users from supplying junk data. This isn't a problem for SETI et al. because they don't pay, so there is no incentive to cheat.
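For what it's worth, the reason a prime-based scheme like PrimeCoin can work is the same asymmetry hashing has: chains of primes are hard to find but cheap to check. A rough Python sketch of checking a claimed Cunningham chain (only an illustration of the idea, not PrimeCoin's actual verifier, and verify_cunningham_chain is a name I made up):

from sympy import isprime  # primality test; finding chains is the hard part

def verify_cunningham_chain(origin, length):
    # A first-kind Cunningham chain is p, 2p+1, 4p+3, ...
    # with every element prime. Checking takes a few primality tests.
    p = origin
    for _ in range(length):
        if not isprime(p):
            return False
        p = 2 * p + 1
    return True

print(verify_cunningham_chain(2, 5))  # 2, 5, 11, 23, 47 are all prime -> True

The miner's claimed proof is just (origin, length), so anyone can audit it almost instantly, which is exactly the property protein folding lacks.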
I think a lot of people have suggested 'useful' computations for mining purposes, but unlike a hash function (or prime numbers), things like protein folding and SETI are very difficult to verify without performing the same amount of computation again, which defeats the point.
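To make that asymmetry concrete, here is a toy hash-based proof of work in Python (the four-zero difficulty prefix and the block contents are invented for illustration): mining loops over nonces, but verifying a claimed nonce is a single hash.

import hashlib

DIFFICULTY = "0000"  # required hex prefix of the digest; illustrative only

def mine(block):
    # expensive: tens of thousands of attempts on average at this difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block + str(nonce).encode()).hexdigest()
        if digest.startswith(DIFFICULTY):
            return nonce
        nonce += 1

def verify(block, nonce):
    # cheap: one hash, no matter how long mining took
    digest = hashlib.sha256(block + str(nonce).encode()).hexdigest()
    return digest.startswith(DIFFICULTY)

nonce = mine(b"example block")
print(nonce, verify(b"example block", nonce))  # True

There is no analogous shortcut for checking a protein-folding result, which is what the next annotation tries to work around.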
// things like protein folding and SETI are very difficult to verify without performing the same amount of computation again, which defeats the point. //

There's a solution to that: you just do the computations twice. But since you are now paying people to do the processing, you have probably more than doubled your computing power anyway. Just make sure the data sets are sent out randomly enough that two computers owned by associated people never get the same data set. This would also protect against low-probability corruption of results caused by bad memory chips or CPUs.

Potentially, for computers that have proven themselves reliable over many data sets, verification could be switched to random sampling rather than re-verifying every data set, with any error triggering re-verification of previous data sets processed on that computer. The pay per unit of processing could then rise to almost 2x for a computer that was reliable over the long run and always matched when its results were double-checked.
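A back-of-the-envelope sketch of that scheme in Python; every name and threshold here (SPOT_CHECK_RATE, TRUST_THRESHOLD, the Machine class) is invented for illustration, not taken from any real grid-computing project:

import random

SPOT_CHECK_RATE = 0.1   # fraction of a trusted machine's results re-run
TRUST_THRESHOLD = 100   # consecutive clean results before sampling kicks in

class Machine:
    def __init__(self, machine_id, owner):
        self.machine_id = machine_id
        self.owner = owner
        self.clean_streak = 0

    def trusted(self):
        return self.clean_streak >= TRUST_THRESHOLD

def assign_pair(machines):
    # send the same data set to two machines with unrelated owners
    first = random.choice(machines)
    others = [m for m in machines if m.owner != first.owner]
    return first, random.choice(others)

def needs_second_run(machine):
    # untrusted machines are always double-checked; trusted ones only sampled
    return not machine.trusted() or random.random() < SPOT_CHECK_RATE

def record(machine, results_matched):
    if results_matched:
        machine.clean_streak += 1
    else:
        machine.clean_streak = 0  # mismatch: re-verify its past data sets

def pay_multiplier(machine):
    # a long-reliable machine approaches 2x pay, since its results rarely
    # cost the project a second full computation
    return 2.0 - SPOT_CHECK_RATE if machine.trusted() else 1.0

SPOT_CHECK_RATE is the knob: lower means cheaper verification and better pay for honest machines, but a longer window before a cheater is caught.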
Feeding the errors back to the computer's owner would also let them find and replace unreliable memory.