Funding Metric Applied To All Scientifically Arrived At Results

Scientists who agree that XYZ is awesome need to state their relationship, if any, with the entities that profit from XYZ in their documentation.
 

Apply this to avoid any charges of conflict of interest.

If my company made XYZ I'd totally support this. I want the truth out there and don't want to be paying anybody to sell a concept.

This is just admitting that the scientific method is applied by humans, and, as my hero Ernst Mach pointed out, you can't judge the result without considering the point of reference.

Likewise, any profit that might be gained from negative studies on XYZ needs to be part of the formula review too.

doctorremulac3, Oct 13 2024


       Actually this is what we do already. It's standard practice.
Loris, Oct 13 2024
  

       But how often do researchers come up with results that counter the interests of those funding them?   

       That'd be an interesting study. Looked it up and only saw speculation, no actual studies. At least in my brief check.
doctorremulac3, Oct 13 2024
  

       //But how often do researchers come up with results that counter the interests of those funding them?//   

       The thing to do there - with studies which may potentially give results counter to those desired by a sponsor - is to require that projects are pre-registered.
The idea is that you declare what you're going to measure, and how you intend to do it, so there isn't wriggle-room to p-hack the results or whatever.
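A minimal sketch (in Python, with illustrative numbers) of why pre-registration closes off p-hacking: under the null hypothesis each outcome's p-value is uniform on [0, 1], so a study that quietly measures 20 outcomes and reports whichever one clears α = 0.05 will "find" something roughly 64% of the time, even when no real effect exists.

```python
import random

random.seed(42)

ALPHA = 0.05
N_OUTCOMES = 20       # outcomes measured per study (illustrative)
N_STUDIES = 10_000    # simulated studies, all with no real effect

def study_finds_something() -> bool:
    """Under the null, each outcome's p-value is uniform on [0, 1].
    A p-hacker reports the study as positive if ANY outcome clears alpha."""
    return any(random.random() < ALPHA for _ in range(N_OUTCOMES))

false_positive_rate = sum(study_finds_something() for _ in range(N_STUDIES)) / N_STUDIES

# Analytic value: 1 - (1 - alpha)^k = 1 - 0.95^20 ≈ 0.64
print(f"Fraction of null studies reporting a 'finding': {false_positive_rate:.3f}")
```

Pre-registration forces the single outcome to be named in advance, which brings the false-positive rate back down to the nominal 5%.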
  

       Obviously, this doesn't apply to the majority of research - effectively, where you're looking for new stuff, or don't know what you're going to find.
Lots of research is funded by organisations with an interest in the result, but it's more "Can we figure out how this thing we have works?" or "Can we find or make a new thing to use?", rather than "How much better is this established product than the alternatives".
Nevertheless, papers from research funded by industry will still often carry a conflict of interest notice, to make sure it is beyond reproach.
Here is one, from one of my papers:
::Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: Authors specified as employees of GSK have competing interests because this work was part of a project to develop technology which would have value to GSK; all authors associated with the university of Birmingham have competing interests because the university of Birmingham would benefit financially if the joint project with GSK led to a process that was deemed to be profitable to GSK. This does not alter our adherence to PLOS ONE policies on sharing data and materials.::
  

       Realistically, there was no way as an "author associated with the university of Birmingham" that I would benefit financially from the paper coming out - the project was long over and I'd been working unpaid on it for years.   

         

       There are other known issues with the literature, of course. One is that it's much easier (and more beneficial to the researchers and publishers) to publish positive results (that is, unexpected or interesting findings), and perhaps there are several groups of independent researchers performing the same experiment. So the papers which are published can have spurious results. This is behind the 'replication crisis' in a few fields (where some claimed results fail to be repeated).
In fields where there are relatively few interesting questions, and many people doing the same thing - like medicine - there are analytical tools to help deal with this, like funnel plots.
Loris, Oct 13 2024
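A rough numerical sketch of the publication-bias effect described above (all numbers are illustrative, not from any real study): many groups run the same null experiment, only the "significant" ones get published, and the published effect sizes are therefore inflated. In a funnel plot this censoring shows up as missing low-effect studies.

```python
import random, math

random.seed(1)

N_GROUPS = 2000   # independent groups running the same experiment
N = 30            # sample size per group
SIGMA = 1.0       # measurement noise; the true effect is exactly zero

se = SIGMA / math.sqrt(N)   # standard error of each group's mean
published = []
for _ in range(N_GROUPS):
    effect = sum(random.gauss(0.0, SIGMA) for _ in range(N)) / N
    if abs(effect) > 1.96 * se:          # journals prefer "significant" results
        published.append(effect)

print(f"{len(published)} of {N_GROUPS} null studies published (~5% expected)")
mean_abs = sum(abs(e) for e in published) / len(published)
print(f"mean |effect| among published studies: {mean_abs:.3f} (true effect: 0)")
```

The published literature reports a clearly non-zero average effect purely because the null results were filtered out; meta-analytic tools like funnel plots exist to detect exactly this asymmetry.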
  

       This is going to sound completely naive but I've often wondered if unreproducible results of old experiments were undertaken without altitude in mind?   

       Air pressure changes everything.   

       ... This finding brought to you by Big Air.   

       (They're trying to make you dependent on it, you know.)
pertinax, Oct 14 2024
  

       //This is going to sound completely naive but I've often wondered if unreproducible results of old experiments were undertaken without altitude in mind?//   

       It's possible. But... well, many, many things can affect an experiment. Particularly in biology. Hence Harvard's law: “Under the most rigorously controlled conditions of pressure, temperature, volume, humidity, and other variables, the organism will do as it damn well pleases.” (I found this in a slightly different form attributed to Larry Wall; I'm not confident he made the statement originally.)   

       But bear in mind, quite often in experiments you're doing comparisons. That is, you have an experimental group (or perhaps several) and a control group, with (ideally, though not necessarily) one variable different. So the /other/ overall conditions are the same for all groups.   

       And also, you don't just do the experiment once. At least, not if you're rigorous.
I know you don't believe statistics, but the reason it's necessary is to deal with the random shit which happens occasionally.
Loris, Oct 14 2024
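The point about repetition and statistics can be sketched numerically (a toy Python simulation, with made-up numbers): averaging n noisy measurements shrinks the spread of the result roughly as 1/√n, which is exactly how statistics tames "the random shit which happens occasionally".

```python
import random, math

random.seed(7)

TRUE_VALUE = 10.0
NOISE_SD = 2.0    # random noise on each individual measurement

def spread_of_estimate(n_repeats: int, n_trials: int = 2000) -> float:
    """Std dev of the final estimate when each experiment
    averages n_repeats noisy measurements of TRUE_VALUE."""
    estimates = [
        sum(random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n_repeats)) / n_repeats
        for _ in range(n_trials)
    ]
    mean = sum(estimates) / n_trials
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / n_trials)

for n in (1, 4, 16):
    print(f"n={n:2d} repeats -> spread of result ≈ {spread_of_estimate(n):.3f}")
# Spread shrinks roughly as 1/sqrt(n): halving the error costs 4x the work.
```

This is also why a single unrepeated run, at any altitude, proves very little on its own.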
  


 
