Fraud, so much fraud | Hacker News

These sorts of articles raise so many thoughts and emotions in me. I was trained as a computational biologist with a little lab work and ran gels from time to time. Personally, I hated gels- they’re finicky, messy, ugly, and don’t really tell you very much. But molecular biology as a field runs on gels- it’s the primary source of results for almost everything in molbio. I have seen so many talks and papers that rested entirely on a single image of a gel, which is really just some dark bands.

At the same time, I was a failed scientist: my gels weren’t as interesting or convincing as the ones done by the folks who went on to be more successful. At the time (20+ years ago) it didn’t occur to me that anybody would intentionally modify images of gels to promote the results they claimed, although I did assume that folks didn’t do a good job of organizing their data, and occasionally published papers that were wrong simply because they confused two images.

Would I have been more successful if fewer people (and I now believe this is a common occurrence) published fraudulent images of gels? Maybe, maybe not. But the more important thing is that everybody just went along with this. I participated in many journal clubs where folks would just flip to Figure 3, assume the gel was what the authors claimed, and proceed to agree with (or disagree with) the results and conclusions uncritically. Whereas I would spend a lot of time trying to understand what experiment was actually run, and what the data showed.

Similar – when I was younger, I would never have suspected that a scientist was committing fraud.

As I’ve gotten older, I understand that Charlie Munger’s observation “Show me the incentive and I will show you the outcome” is applicable everywhere – including science.

Academic scientists’ careers are driven by publishing, citations and impact. Arguably some have figured out how to game the system to advance their careers. Science be damned.

I think my favorite Simpsons gag is the episode where Lisa enlists a scientist (voiced by Stephen Jay Gould) to run tests to debunk some angel bones that were found at a construction site.

In the middle of the episode, the scientist bicycles up to report, dramatically, that the tests “were inconclusive”.

In the end, it’s revealed that the bones were a fraud concocted by some mall developers to promote their new mall.

After this is revealed, Lisa asks the scientist about the tests. He shrugs:

“I’m not going to lie to you, Lisa. I never ran the tests.”

It’s funny on a few levels but what I find most amusing is that his incentive is left a mystery.

Well, the incentive is that he didn’t want to run the tests out of laziness (i.e. he lacked an incentive to run them). He ran to Lisa to give his anticlimactic report not to be deceptive; he just happened to be cycling through that part of town and needed to use the bathroom really badly.

The writers of these episodes were really on another level considering it was a cartoon.

Lisa’s first word is still a personal favourite of mine, especially now as a father.

To be honest, it’s difficult to tell if the subplot makes sense on purpose, or if the writers just wanted to make a joke and it just happened to end up making sense. I don’t think I had ever put the three scenes together before now.

More often than not in scientific fraud, I’ve seen the underlying motives be personal beliefs rather than financial gain. This is why science needs to be much stronger in weeding out the charlatans.

That’s a good one. In my experience, corruption is almost always disguised as neglect and incompetence. Corrupt people meticulously cover their tracks by coming up with excuses to show neglect; some of them only accept bribes that they can explain away as neglect where they have plausible deniability. It doesn’t take much brainpower to do well, just malicious intent and knowing the upper limits.

IMO, Hanlon’s razor “Never attribute to malice that which can be adequately explained by stupidity” is a narrative which was created to condition the masses into accepting being conned repeatedly.

On the topic, I subscribe to Grey’s law, “Any sufficiently advanced incompetence is indistinguishable from malice,” so I see idiots as malicious. In the very best case, idiots in positions of power are malicious for accepting the position and thus preventing someone more competent from getting it. It really doesn’t matter what their intent is. Deep down, stupid people know that they’re stupid, but they let their emotions get in the way – the same emotions that prevent them from getting smarter.

Ehh… I think neglect and incompetence are super common. I have a sink full of dishes downstairs to prove it. I think corruption, while not rare, is still far rarer. Horses over zebras still (at least in the US).

> outcomes we collectively agree upon.

lol, what are the chances?

The average Joe is interested in Dems vs. Reps or the latest show on Netflix.

The average researcher is worried about his livelihood, tenure etc.

This is a weird take, assuming the average researcher cannot be an average Joe, and also that average people aren’t also worried about their livelihood…you might want to revisit your view of the world.

Far more educated. But certainly not smarter.

And it’s not at all clear if that education does anything other than magnify their intellectual predispositions. The smart people can make great strides, but the stupid will be stupid louder and harder. And the average may well just be… more average.

IQ levels have risen on average because of better nutrition, the removal of lead, and education.

An increase in blood lead from 10 to 20 micrograms/dl was associated with a decrease of 2.6 IQ points.

If one only cares about Democrats vs. Republicans (or left vs. “far right” in Europe), one also doesn’t REALLY care about politics.

Caring = caring to understand how the system works and how the incentives work for participants in it.

If I’m super generous I would guess maybe 0.1 percent of the population cares by that definition.

I think you’re a little too harsh in that judgement. I’d say it’s at least 1%. Possibly even 5%.

It still means that they are massively, massively outweighed by loud tribalism.

There’s a problem that if you care, as an average person, it’s hard to do much with it. Every few years you can vote left or right, which unless you happen to live in a marginal constituency or swing state, has no effect.

> Every few years you can vote left or right,

If you’re talking about the US, you can vote center-right (Democratic) or far right (Republican). There is no viable left wing party in the US.

If humanity is to mature, we must be critical and take responsibility for ourselves, particularly where the alignment of others is concerned – such as by starting from disagreement with everything and validating it for oneself.

> Academic scientists’ careers are driven by publishing, citations and impact. Arguably some have figured out how to game the system to advance their careers. Science be damned.

I’ve talked to multiple professors about this and I think it’s not that they don’t care about science. They just care more about their career. And I don’t blame them. It’s a slippery slope, and once you notice other people starting to beat you, it’s very hard to stay on the righteous path (note). Heck, even I wrote things during my PhD that I don’t agree with. But at some point you have to pick your battles. You cannot fight every point.

In the end I also don’t think they care that much about science. Political parties often push certain ideas more or less depending on their beliefs. And scientists know this, since they will often write up their own ideas such that it sounds like they solve a problem for the government. If you think about it, it’s kind of a miracle that sometimes something good is produced from all this mess. There is some beauty to that.

(note) I’m not talking about blatant fraud here but about the smaller things like accepting comments from a reviewer which you know are incorrect, or using a methodology that is the status quo but you know is highly problematic.

> They just care more about their career.

It’s not even that.

In most fields, you need lab space and equipment and grad students to get stuff done. And to pay for that, you need funding. And to get funding, you need to publish and apply for grants.

You have to pay attention to the “career” side of things — otherwise, you won’t get to do the science at all.

> Academic scientists’ careers are driven by publishing, citations and impact.

Publishing and citations can be and are gamed, but is impact also gamed on a wide scale? That one seems harder to fake. Either a result is true and useful, or it’s not.

Actual real world impact? Hard to game. But, nobody measures that. Everything that’s tracked is a circularly defined success metric (you’re successful if other academics consider you successful).

Convince (lobby) a politician that your research will save the trees/whales, fix global warming, end starvation, or address any other fear-inducing thing, and get funding to bribe (lobby) more politicians to further your “research” until anyone would be a fool to question the science.

I feel like this same story happens every year and people are surprised. I often wonder how many “academic” or “scientific” scandals (quotes so as not to conflate them with valid research and study) need to happen before the enterprise becomes as distrusted as politicians.

The article mentions that there are two drugs that resulted from this research. One of them failed a trial recently. Nobody knows if this fraud means that the drug never could have worked, or if this was just bad luck. So yes: people do measure real-world impact. It’s just that it takes a very long time and there are plenty of confounding factors, since even non-fraudulent drug research can fail.

Oh, yes. Repackaging and reframing of data – as mentioned in the OP article – is a common practice for farming impact and article numbers too.

Why do novel research when you can just partner up with your friends, bring your data and combine it with theirs for the umpteenth paper on the same thing, then wade into spamming the submissions of every academic journal with impact in your field! If it was published once in this journal, surely the 3rd recombination will be too (and more often than not… it is).

Lots of people doing research find this depressing to the point of quitting. Many of my peers left research as they couldn’t stomach all this nonsense. In experimental fields, the current academic system rewards dishonesty so much that ugly things have become really common.

In my relatively short career, I have been asked to manipulate results several times. I refused, but this took an immense toll, especially on two occasions. Some people working with me wanted to support me fighting dishonesty. But guess what, they all had families and careers and were ultimately not willing to do anything as this could jeopardize their position.

I’ve also witnessed first-hand how people that manage to publish well adopt monopolistic strategies, sabotaging interesting grant proposals from other groups or stalling their article submissions while they copy them. This is a problem that seldom gets discussed. The current review system favors monocultures and winner-takes-all scenarios.

For these reasons, I think industrial labs will be doing much better. Incentives there are not that perverse.

> I’ve also witnessed first-hand how people that manage to publish well adopt monopolistic strategies, sabotaging interesting grant proposals from other groups or stalling their article submissions while they copy them. This is a problem that seldom gets discussed.

Agree. Everyone has heard about the extreme fraud cases, but the casual toxicity that the pressure cooker environment elicits is rarely discussed and probably a far larger problem. I say that as someone who has spent too much time in this environment and never witnessed outright fraud – that I know of.

Nah, I think the article has good intentions. And rampant fraud is important to address.

But as others said, addressing smaller shenanigans is also crucial to steer things in the right direction.

> Similar – when I was younger, I would never have suspected that a scientist was committing fraud.

Unfortunately many less bright people seem to interpret this as “never trust science”, when in reality science is still the best way to push humanity forward and alleviate human suffering, _despite_ all the fraud and misaligned incentives that may influence it.

In defence of the “less bright people” or deplorables as others have called them – they are deeply suspicious of Science(tm) used as a cudgel.

They intuit that some parasitic entity or entities have latched on to Science and are co-opting it for their own gain, to achieve their own purposes which run counter to the interests of the people.

The heavy handed Covid response and censorship is a prime example of that.

The whole system has been corrupted and therefore it is not possible to have a de-facto assumption of good faith of the actors.

I think this comment comes across as slightly ignorant.

Many examples exist where a misguided belief in scientific ‘facts’ (usually a ropey hypothesis, with seemingly ‘damning’ evidence), or a straight up abuse of the scientific method, causes direct harm.

Suspicion is often based on facts or experience.

People have been infected with diseases without their knowledge.

People have been forced to undergo surgical procedures on the basis of spurious claims.

People have been burnt alive in buildings judged to be safe.

And look at Boeing.

No one has a problem with science itself per se. Everyone accepts the scientific method to be one of our greatest cultural achievements.

But whether one is “less bright”, or super smart, we all know we as humans, are prone to mistakes, and are just as prone to bend the truth, to cover up those mistakes.

There’s nothing plebeian about this form of suspicion. In fact, the scientific method relies on it (peer review).

Preventing people from working if they didn’t get a covid vaccine was a bit heavy handed.

And saying it was likely made in a lab in China is kind of censored to this day. I think partly because the science community doesn’t want to take flack for doing risky stuff and killing millions.

> Preventing people from working if they didn’t get a covid vaccine was a bit heavy handed.

Nobody did that. They prevented you from working with me.

Nobody here or there was forced to get a vaccine. But if you refused, it was right to shun you.

Freedom is about more than the individual. We as a group should be free from the consequences of individual actions.

A decent chunk of the pandemic response was politicians power tripping in the name of The Science and later having to roll things back, either because of public backlash (eg hotlines to encourage snitching on their neighbors), because it was actually illegal (requiring all large businesses to have their employees vaccinated or tested weekly), or because of politics (initially telling the public that masks were ineffective, then tripling down on mask mandates, Harris saying the vaccine could not be trusted based on Trump talking about its efficacy).

There was also dumb stuff like social media suppressing mention of covid, even to this day youtubers use euphemisms to refer to that period.

To me it seems perfectly understandable how people who aren’t actually involved in science might mix up The Science and actual science after all the political nastiness of those years, especially when we add on top all of the awful pop science reporting from the past decades.

It’s not that it was heavy handed. But it was completely nonsensical.

For example here restaurants were open, but they had to close at 19. So instead of spreading the clientele over more hours, they were always 100% full.

Also, they CUT ⅔ of public transport rides, so they were incredibly overcrowded. People with real jobs that can’t be done from home still had to go to work. BUT they put stickers on the floor telling people to keep distance. They also hired people to be at crowded stops to spray hand sanitizer on whoever wanted it, and to tell people to keep their distance (while watching them push their way in).

In general all the restrictions were about the “having fun” stuff, but not about the “go to work” stuff. Even companies had no obligation to let people who could work from home stay at home. Some companies kept having their offices full.

Oh and let’s not forget the recommendations of staying home if you so much as sneezed. But you wouldn’t get paid. How did they expect people to pay their rent?

I could go on for hours with this. The bullshit measures that were marketed as “what the scientists are telling us to do” did a lot of harm to the trust that the general population puts into science.

I attended the opening talk of a local science festival.

The speaker was a psychology researcher from UK who flew here for that, and the talk was about conspiracy theories. When they introduced her they stated that she wouldn’t accept any questions from the audience.

This was received with boos and shouts that it was not real science.

She then proceeded to bundle all the conspiracy theories together. Going from “the government is doing something bad” to “earth is flat”.

After that talk I can really believe that the bullshit conspiracy theories are made up and spread artificially so that anyone that comes up with any conspiracy theory can be shushed as a crazy person.

But… in reality conspiracies do exist. One can make a theory and then test if it’s true (or get killed/imprisoned by the government while trying).

You’re not wrong, but people who oppose “science as a cudgel” tend to support “religion as a cudgel”, and don’t see a difference between science and religion, except that one is the Yellow team and one is the Purple team, and they have a preferred color.

I try to distinguish between “the scientific process” and building scientific consensus. As rigorous as the scientific process may be, building consensus is always a messy and human thing.

Why? Some guy writing an op-ed saying how frustrating it is that science is full of fraud is a great argument for the scientific method? There have been people writing articles like this for over 20 years if not much longer about all kinds of fields. Nothing ever happens, nothing ever improves, it never goes beyond people saying “tut tut how terrible”. This sort of thing is entirely predictable and will keep happening, over and over again. On the current course, there will be articles just like this one being discussed in another twenty years from now.

I don’t think Derek Lowe is frustrated that “science is full of fraud”, this is likely editorialization on your part. It seems that it stems specifically from Masliah, who is common across all papers in the dossier. Granted, Masliah appears to be prolific, so this is admittedly a large issue in the peer review and verification structure in this field.

To put this into context though:

Let’s begin by supposing that fraud exists in all ventures where people stand to gain, which I don’t think is controversial at all, especially not in this comment section.

In light of this assumption, the fact that this all came out in the first place is proof that being a luminary does not make you immune from investigation. That this happens ‘over and over again’ simply means that eventually we are catching this fraud. The fact that the scientific community is constantly trying to reproduce and verify is why these become public in the first place.

So on the contrary, it’s not that nothing ever happens or nothing ever improves. There will be articles like this one in twenty years because there will still be fraudsters in twenty years, and there will still be scientists working to verify their work.

I don’t think it’s true that eventually we are catching this fraud 🙁 This keeps happening because so much is out there, it doesn’t follow that all or even most of it is being caught. Even a tiny fraction of a fraction of a percent being caught would yield a constant stream of such stories. I have a collection of articles on my blog dating back years that cover various fraudulent papers in different fields, and even whole fields in which the bulk of all papers are based on fraud (e.g. the literature dealing with misinformation bots on Twitter). None of them have ever been retracted or even had any of the problems be acknowledged outside of the blogosphere.

It’s really hard to understand the scale of the problem until you wade through it yourself. Fraud is absolutely endemic in science. Dig in and you can easily uncover bogus papers, and none of them will ever be acknowledged or retracted. In particular there’s a nasty attitude problem in which reports of fraud from outside the academic institutions will frequently be written off as “right wing” and thus inherently illegitimate. This can happen regardless of the nature of the criticism or whether it’s in any way political. Literally, things like bug reports or reports of numbers that don’t add up can be discarded this way. Thus they implement an unwritten rule that only academics are allowed to report fraud by academics, and of course, they are strongly incentivized not to do so. So Lowe is correct. It’s really a mess.

> Unfortunately many less bright people seem to interpret this as “never trust science”

Unfortunately many “smart” people insist on telling “dumb” people how to think instead of having the introspection and humility to examine where we’ve gone wrong and spending a lot of time and effort on fixing it.

No, easier to gaslight the idiots

Exactly. “This is bad because dumb people won’t believe us.”

Not “This is bad because it undermines science, is lying, and unethical, regardless of what people think.”

A lot of people are working to fix the K-12+ educational system which is the root cause of many stupid people, but beyond that, it’s objectively hard to fix stupid.

Most people, stupid or otherwise, wouldn’t take a critical thinking course, for example. Many would have no time for it, to say little of motivation. Still others are proud of being stupid and will shun anything they consider “intellectual”.

> There are more new cases of cancer in the United States now than there have ever been before.

Sigh

That is because of antibiotics, essentially.

The mechanism by which antibiotics cause cancer is that they stop you dying from bacterial infection, once a huge (the biggest?) killer.

You still have to die of something….

I’m not sure the two uses of “science” in your post are using the same meanings of the word.

Science is the name given to a few different processes, a body of knowledge and statements by designated spokespersons. Each of these have different flaws and failure modes in different environments and domains.

All science does is show us how to move a whole bunch of piles of shit over into one big pile of shit, off in the corner. Or perhaps onto an unsuspecting group of poor people, because the burden demands to be held and somebody has to hold the bag of shit. Right?

We may interpret this as convenience… But the tragedy of the commons says that we can’t even have science if someone isn’t holding what it is we don’t want to be holding… I’m not saying I didn’t love science or not think it’s super interesting or anything… Can we really say it alleviates suffering or does it displace it for one group of people until a new problem comes in and takes that one’s place? How many people here will be holding the bag of shit tonight? USA numba 1!!!

The Manhattan project was a government project that was run like a startup.

If such a project happened today, academic scientists would be trying to figure out ways to bend their existing research to match the grants. Then it would take another 30 years before people started to ask why nothing has been delivered yet.

Run like a startup in what respect?

It was a massive government-directed military project in wartime that was able to recruit all the top theoretical physicists at the time around a common aim in an urgent technological arms race to build the bomb. It included a vast effort of army engineers to build the facilities to process the fuel and so on. I’m not seeing the parallels with startups.

Even your description sounds like a startup, to me.

There was a hook to get the funding (easy to get weapons funding in wartime).
Recruiting the top talent.
Urgency (beat everybody else to the punch).
Outsourcing the building of infrastructure while you focus on the unique/hard part.

I’m not seeing how you can’t see the parallels with startups.

In that case any high-priority military intelligence project is “like a startup”. Why say that it’s run like a startup as opposed to just saying it was run like a high-priority military-intelligence project?

The GP suggested that a reason for the success of the Manhattan project was that it was run like a startup, whereas it seems more illuminating to point out that it was a massively funded military project in wartime. I was curious if there was some more specific rationale for the startup comparison.

I just want to echo this!

They built a gosh darned city… oh wait, I was wrong, they built three. It was run like an extremely high-value military project, which is exactly what it was. Sure it was more theoretical than other military projects(at the time), but that is the game sometimes.

I get the sense that some folks just think “faster than we would do it now” is the same as startup. Which, to put it politely, I strongly disagree with. Startups are great and I am grateful for the daily value adds to my life, but pretending everything “fast” has startup mentality is just missing the mark.

Nuclear physics had just “cracked open” and there were lots of highly promising prospects to pursue. You can’t recreate that historical situation by switching from agile to scrum, or from scrum to agile.

> Arguably some have figured out how to game the system to advance their careers

lol arguably? i would bet my generous, non-academia, industry salary for the next 10 years, that there’s not a single academic with a citation count over say … 50k (ostensibly the most successful academic) that isn’t gaming the system.

– signed someone who got their phd at a “prestigious” uni under a guy with >100k citations

That attitude coincides with the current delusion in our society that science is perpetuating a fraud at the level of religions whose leaders are trying to control their flock for financial and sexual gain.

A broken system that incentivizes fraud over knowledge is a real problem.

An assertion that scientists chase the money by nature is a dangerous one that will set us back to the stone age when instead we should be traversing the space as a whole.

At some point, the good scientists leave and the fraudsters start to filter for more fraudsters. If that goes on, it’s over – academia is gone. Entirely. It cannot grow back. It’s just a building with conmen in labcoats.

My suggestion stands: Give true scientists the ability to hunt fraudsters for budgets. If you hunt and nail down a fraudster, you get his funding for your research.

That is a ridiculous exaggeration. Yes, like in every walk of life, fraud happens. However, the extreme success of academic science shows that most of it is real, honest, work.

It is field dependent, but I’m not entirely against what the parent said. I work in ML and I am positive that all this is going on(0). There are lots of true believers, though, and that’s what makes things extra hard. Sometimes the fraudsters take over by making the system itself incompetent, and then everyone is in good company. In this way, fraud isn’t committed with intent, weirdly enough.

Just look at all the ML reasoning papers. Whether you believe LLMs reason or not, an important factor you have to disentangle when trying to prove this is what data the models were trained on – to distinguish memorization from reasoning. You won’t find this analysis, because it’s almost impossible given that the data is a trade secret, even by Meta.

This year at ACL, a paper (Mission: Impossible Language Models) won a best paper award despite its results running contrary to its claim, and very obviously so too.

Or there is the HumanEval paper, which proposed that they created a dataset that was not spoiled because they “hand wrote” over a hundred “Leetcode style problems”. 60 authors and they didn’t bother to check… But why would you check when the questions are things like “calculate the mean”. What fucking programmer thinks there isn’t Python code on GitHub pre-2021 that: calculates the mean, takes the floor, checks if a string is a palindrome, calculates the greatest common divisor, or any similar question. How did this become an influential dataset‽
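
To make the triviality concrete, here is roughly what problems of that kind reduce to in plain Python (my own paraphrases of the problem types listed above, not the actual HumanEval prompts):

    import math

    def mean(xs):
        # "calculate the mean" - one line over builtins
        return sum(xs) / len(xs)

    def is_palindrome(s):
        # "check if a string is a palindrome"
        return s == s[::-1]

    def gcd(a, b):
        # "greatest common divisor" - Euclid's algorithm, in every textbook
        while b:
            a, b = b, a % b
        return a

    # "take the floor" is literally math.floor
    assert mean([1, 2, 3]) == 2 and math.floor(2.7) == 2
    assert is_palindrome("racecar") and gcd(12, 18) == 6

Code like this has existed in countless public repositories since long before 2021, which is the point: “hand-written” does not mean uncontaminated.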

(0) the big reason I’m upset is because I love the field. I’m not in it for money. I’m in it because I grew up on Asimov books and because I want our community to work towards AGI. But now every person that can do print(“hello world”) feels that they can lecture me, a published researcher about what these machines do while they talk about the Turing test (lol, what is this, the 60’s?) and how they’re black boxes (opaque, but certainly not black). I’m fine with armchair experts, but not when they come in swinging with a baseball bat

How so? The fraction of academic science that is applied to anything, anywhere, with clearly identifiable impact is both a tiny fraction of academic science, and also often detrimental quality of impact.

That’s not entirely true. There is research with huge impact, and the money leveraged keeps it brutally honest. Nothing makes it out of a lab and into a fab without certainty that the method works, reproducibly, at least in lab conditions. There are many billions, but not that many billions.

Pretty much every industry functions on a foundation of basic scientific knowledge discovered in academic labs, run by honest people trying to understand the natural world.

Fraud happens. Bad theories happen. The slow turn of scientific wheels takes centuries to crush them but it will always win. Profit doesn’t turn those wheels. Our entire modern lifestyle is the impact.

It becomes a survival bias: if people can cheat at a competitive game (or research field) and get away with it, then at the end you’ll wind up with only cheaters left (everyone else stops playing).
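
A toy simulation of that dynamic, with every parameter invented purely for illustration:

    import random

    # Each round, a cheater is caught and ejected with probability CATCH,
    # while an honest player, repeatedly losing out to inflated results,
    # gets discouraged and quits with probability QUIT.
    CATCH, QUIT, ROUNDS = 0.02, 0.10, 50

    def simulate(honest=950, cheaters=50):
        for _ in range(ROUNDS):
            cheaters = sum(random.random() >= CATCH for _ in range(cheaters))
            honest = sum(random.random() >= QUIT for _ in range(honest))
        return honest, cheaters

    honest, cheaters = simulate()
    print(f"after {ROUNDS} rounds: {honest} honest, {cheaters} cheaters")

Whenever getting caught is much rarer than getting discouraged, the survivors are mostly cheaters, even though they started as a small minority.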

As they say: the scum rises to the top, true for academia, politics etc, any organization really.

Quote: “The Only Thing Necessary for the Triumph of Evil is that Good Men Do Nothing”

My own nuanced take on it:

Incompetent people are quick to grab authority and power. On the other hand, principled, competent people are reluctant to take on positions of authority and power even when offered. For these people, positions of power (a) have connotations of tyranny and (b) are uninteresting (i.e., technical problems are more interesting). Also, the reluctance of principled people to form coalitions to keep out the cheaters – because they are a divided bunch themselves – exacerbates the problem, whereas the cheaters can often collude (temporarily) to achieve their nefarious goals.

Those you’d want to lead recognize that leadership is a responsibility, oftentimes marked by personal sacrifice.

Those you’d never want to lead mostly regard leadership as a privilege used for personal gain.

I would add c) often entail incalculable risk for any who aren’t already corrupt enough to be climbing into bed with other evil powers or blackmailing, extorting, or exploiting their way into a safe haven and a golden parachute.

Who willingly remains vulnerable before the Sword of Damocles?

You could improve the situation by incentivizing people to identify cheaters and prove their cheating. If being a successful cheater-hunter was a good career, the field would become self-policing.

This approach opens its own can of worms (you don’t want to overdo it and create a paranoid police-state-like structure), but so far, we have way too little self-policing in science, and the first attempts (like Data Colada) are very controversial among their peers.

And thus we have the Earth. Where all looks like a broken MMO in every direction. Everybody refuses to participate, because it’s 100% griefers, yet nobody can leave.

Business: Can you get a law written to command the economy to give you money or never suffer punishments? Intel fabs (https://reason.com/2024/03/20/federal-handout-to-intel-will-…), Tesla dealers (https://en.wikipedia.org/wiki/Tesla_US_dealership_disputes), Uber taxis (https://www.theguardian.com/news/2022/jul/10/uber-files-leak…), etc… Are you wealthy enough that there’s nothing “normals” can really do? EBay intimidation scandal (https://en.wikipedia.org/wiki/EBay_stalking_scandal).

Economic Academia: Harvard Prof. Gino (https://www.thecrimson.com/article/2024/4/11/harvard-busines…)

Materials Academia: Doping + Graphene = feces papers (https://pubs.acs.org/doi/pdf/10.1021/acsnano.9b00184) “Will Any Crap We Put into Graphene Increase Its Electrocatalytic Effect?” (Bonus joke! Crap is actually a better dopant material.)

Gaming: Roblox double cut on sales (that people mostly just argue about how enormous it is, because the math’s purposely confusing) (https://news.ycombinator.com/item?id=28247034)

Politics: Was Santos ever actually punished?

Military: The saga of the Navy, Pacific Fleet, and Fat Leonard (https://en.wikipedia.org/wiki/Fat_Leonard_scandal) “exploited the intelligence for illicit profit, brazenly ordering his moles to redirect aircraft carriers, ships and subs to ports he controlled in Southeast Asia so he could more easily bilk the Navy for fuel, tugboats, barges, food, water and sewage removal.”

Work: “Loyal workers are selectively and ironically targeted for exploitation” (https://www.sciencedirect.com/science/article/abs/pii/S00221…)

There are others; that’s already so many…

> don’t really tell you very much

???

I think this statement is either meaningless or incorrect. At the very least your conclusion is context dependent.

That being said, I ran gels back in the stone ages when you didn’t just buy a stack of pre-made gels that slotted into a tank.

I had to clean my glass plates, make the polyacrylamide solution, clamp the plates together with office binder clips and make sure that the rubber gasket was water tight. So many times, the gasket seal was poor and my polyacrylamide leaked all over the bench top.

I hated running them. But when they worked, they were remarkably informative.

I used to work with someone up until the point I realized they were so distant from any form of reality that they couldn’t distinguish between fact and fiction.

Naturally, they are now the head of AI where they work.

Hacker news is completely flooded with “AI learns just like humans do” and “AI models the human brain” despite neither of these things having any concrete evidence at all.

Unfortunately it isn’t just bosses being fooled by this. Scores of people push this crap.

I am not saying AI has no value. I am saying that these idiots are idiots.

What evidence for “AI models the human brain” do you want? Isn’t a neural network pretty clearly a simplified model of the working of the human brain? What is there to prove?

Neural networks are not a model of the working of the human brain. They are based on an extremely simplified approximation of how neurons connect and function (which while conceptually similar is a terrible predictive model for biological neurons) and are connected together in ways that have absolutely zero resemblance to how complex nervous systems look in real animals. The burden of proof here is absolutely on showing how LLMs can model the human brain.
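
For a sense of just how simplified, this is essentially the entire neuron abstraction that artificial networks are built from (a minimal sketch, not any particular library’s implementation):

    import math

    def artificial_neuron(inputs, weights, bias):
        # The whole model: a weighted sum passed through a nonlinearity.
        # Dendritic trees, spike timing, neurotransmitter chemistry, and
        # local plasticity rules are all absent from this abstraction.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-z))  # sigmoid activation

    print(artificial_neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.6], 0.2))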

One clear piece of evidence would be ruling out “AI models the corvid brain” or “AI models the cephalopod brain” which might narrow it toward the human brain.

That it’s functionally impossible to do either leads me to believe that “it models some form of intelligence” is about the best we can prove.

I don’t understand the standard of modeling you seem to assume.

Modeling a human brain, a cephalopod brain and a corvid brain aren’t even mutually exclusive if your model is abstract enough.

When I say “a neural network models a human brain”, I’m talking about the high-level concept, not the specific structure of a human brain compared to other brains. You could also say that it models a dog’s brain if you wanted to. It’s just the general working principle that is kind of similar. Does that not count as a model to you?

Edit: Here’s a simple example: I would say that a simple SIR model “models COVID infection in humans”. But that doesn’t mean it can’t also model Pig Flu in pigs. It’s a very abstract model, so it applies to a lot of situations, just like a neural network basically models the brain of every reasonably advanced animal.

I think a lot of people don’t abstract their brain model when they say “models a human brain”, or they’d say “models biological intelligence”, etc. Specifically, I don’t think there are any human traits in LLMs other than having mostly been trained on human outputs. They see tokens and predict tokens; very different sensorium from humans. There aren’t any specific corvid or cephalopod traits either afaik.

Biological brains don’t use gradient descent and don’t seem to use 1-hot encoding/decoding for the most part.

The reality on the ground (for me) has been refreshingly sane.

I work at a company with a substantial BI/ML footprint. Our head of research was tasked with evaluating the applicability of LLMs to either our product or our daily workflows.

To date the consensus is that there isn’t much there for our product, that integrating LLMs into our models would introduce more problems than it would solve, and that we should cautiously experiment with allowing engineers to use tools like co-pilot, provided we take adequate steps to protect our IP.

It was a reasonable exercise carried out by a reasonable person for reasonable reasons (from my POV). I imagine this isn’t an uncommon story? Color me pessimistically optimistic?

For practical reasons we need to have an answer to the buzzword bingo when communicating with customers/company ownership, and now we do. Now we don’t talk much about it because there isn’t much to talk about.

Refreshing but rare; usually this kind of eval gets done by someone excited to do it because they’ve already been “intellectually captured” by the hype.

> raise so many… emotions in me… and I now believe (faking gels) is a common occurrence

On the other hand, shysters always project, and this thread is full of cringe vindications about cheating or faking or whatever. As your “emotions” are probably telling you, that kind of generalization does not feel good, when it is pointed at you, so IMO, you can go and bash your colleagues all you want, but odds are the ones who found results did so legitimately.

Re: the role of (gel) images as the key aspect of a publication. To me this is very understandable, as they convey the information in the most succinct way and also constitute the main data & evidence. Faking this is so bold that it seemed unlikely.

The good news IMO: more recent MolBio methods produce data that can be checked more rigorously than a gel image. A recent example where the evidence in form of DNA sequencing data is contested: https://doi.org/10.1128/mbio.01607-23

Similar story: as a computational biologist, my presentations involved statistics, so people would come to me for help, and it often ended in the disappointing news of a null result. I noticed that it always got published anyway at whichever stage of analysis showed “promise.” The day I saw someone P-hack their way to the front page of Nature was the day I decided to quit biology.
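
For readers outside the field: under the null hypothesis a p-value is uniformly distributed, so analyzing the same null data many ways and keeping the best-looking result manufactures significance. A minimal sketch, where 20 “looks” stand in for the stages of analysis:

    import random

    # Test pure noise 20 different ways and report only the best p-value.
    # The chance of clearing p < 0.05 is 1 - 0.95**20, about 64%.
    random.seed(0)
    trials, looks, hits = 10_000, 20, 0
    for _ in range(trials):
        best_p = min(random.random() for _ in range(looks))
        hits += best_p < 0.05
    print(f"false 'discoveries': {hits / trials:.0%}")  # ~64%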

I still feel that my bio work was far more important than anything I’ve done since, but over here the work is easier, the wages are much better, and fraud isn’t table stakes. Frankly in exchange for those things I’m OK with the work being less important (EDIT: that’s not a swipe at software engineering or my niche in it, it’s a swipe at a system that is bad at incentives).

Oh, and it turns out that software orgs have exactly the same problem, but they know that the solution is to pay for verification work. Science has to move through a few more stages of grief before it accepts this.

What’s amazing to me is that journals don’t require researchers to submit their raw data. At least, as far as I know.

The only option for someone who wants to double check research is to completely replicate a study, which is quite a bit more expensive than double checking the researcher’s work.

Journals are incentivized to publish fantastic results. Organizing raw data in a way that the uninitiated can understand it presents serious friction in getting results out the door.

The organizations who fund the research are (finally) beginning to require it (0)(1), and some journals encourage it, but a massive cultural shift is required and there will be growing pains.

You could also try emailing the corresponding authors. Any good-faith scientist should be happy to share what they have, assuming it’s well organized/legible.

(0) https://new.nsf.gov/public-access
(1) https://sharing.nih.gov/

I’m mostly out now, but I would love to return to a more accountable academia. Often in these discussions it’s hard to say “we need radical changes to publicly funded research and many PIs should be held accountable for dishonest work” without people hearing “I want to get rid of publicly funded research altogether and destroy the careers of a generation of trainees who were in the wrong place at the wrong time”.

Even in my immediate circles, I know many industry scientists who do scientific work beyond the level required by their company, fight to publish it in journals, mentor junior colleagues in a very similar manner to a PhD advisor, and would in every way make excellent professors. There would be a stampede if these people were offered a return to a more accountable academia. Even with lower pay, longer hours, and department duties, MORE than enough highly qualified people would rush in.

A hypothetical transition to this world should be tapered. But even at the limit where academia switched overnight, trainees caught in such a transition could be guaranteed their spots in their program, given direct fellowships to make them independent of their advisor’s grants, given the option to switch advisor, and have their graduation requirements relaxed if appropriate.

It’s easy to hem and haw about the institutional knowledge and ongoing projects that would invariably be lost in such a transition, even if very carefully executed. But we have to consider the ongoing damage being done when, for example, Biogen spends thousands of scientist-years and billions of dollars failing to make an Alzheimer’s drug because the work was dishonest to begin with, or when generations of trainees learn that bending the truth is a little more OK each year.

I’m assuming /s above.

Because the amount of pencil-whipped “peer review” feedback I’ve received could fit in a shoe box, because many “reviewers” are looking for the CV credit for their role and not so much the actual effort of reviewing.

And there’s no way to call them out on their laziness except maybe to not submit to their publication again and warn others against it too.

And, to defend their lack of review, all they need to say to the editor anyway is: “I didn’t see it that way.”

To me, at the time, successful would have been getting a tenure-track position at a Tier 1 university, discovering something important, and never publishing anything that was intentional fraud (I’m OK with making some level of legitimate errors that could need to be retracted).

Of those three, I certainly didn’t achieve #1 or #2, but did achieve #3, mainly because I didn’t write very much and obsessed over what was sent to the editor. Merely being a non-fraud is only part of success.

(note: I’ve changed my definition of success, I now realize that I never ever really wanted to be a tenured professor at a Tier 1 university, because that role is far less fulfilling than I thought it would be).

Most often #1 is sought after as the prerequisite for achieving #2. And due to the structural factors on the number of positions available, funding available, and supply of new PhDs and postdocs, it’s most often a really good idea to avoid #1 these days.

PhDs and postdocs aren’t fungible. The ones worth working with go to the same top 20-40 programs in their field. Even finding PhDs that have the bare minimum qualification of “caring about the field” can be tough as there are all sorts of weird incentives pushing people towards PhDs. Applies for most other things in science as well. Number of papers has greatly increased. Number of papers worth reading has not increased nearly as much.

The older I get the more sympathy I have for people who claim they didn’t become traditionally successful due to adhering to ethical principles.

I used to think that was just cope. But now I’ve been in a few situations where there’s fairly clear opportunities to profit from shady behavior, and seen colleagues that I formerly respected jump at the chance.

Thanks for not being a fraud.

> That is not enough to most people.

You absolutely nail the most profound pathology of our time. Being a decent, honest, smart, hardworking person is just “not enough” any more. We’ve created a normatively criminal society.

Of course. We’ve always had criminals. And as you say, throughout history people have complained about it. All I’m saying is that today people positively celebrate it, and – according to the parent poster – we now demand criminality as a necessity for “success”.

Just looking at the capitals of many countries, lots of monuments (statues, etc.) are dedicated to conquerors who were basically celebrated for successfully killing, subjugating and stealing from others.

There are still people that have success without anything criminal.

This is why institutions break down in the long run in any civilization. People like you, people of principle, are drowned out by agents acting exclusively in their own interest without ethics.

It happens everywhere.

The only solution to this is skin in the game. Without skin in the game the fraudsters fraud, the audience just naively goes along with it, and the institution collapses under the weight of lies.

The iron laws of bureaucracy are:

1) Nothing else matters but the budget

2) Over the long run, the people invested in the bureaucracy always win out over the people invested in the mission/point

Science is just as susceptible to 2) as anything else.

Don’t hate the player, hate the game. Governments made it so scientists only survive if they show results, and specifically the results they want to see. Otherwise, no more grants and you are done. Whether the results are fake or true does not matter.

“Science” nowadays is mostly BS, while the scientific method (hardly ever used in “science” nowadays) is still gold.

Do hate the player. People are taught ethics for a reason: no set of rules and laws are sufficient to ensure integrity of the system. We rely on personal integrity. This is why we teach it to our children.

When everything is a commodity (nothing runs outside of the market economy), the incentives are skewed to this type of behavior.

‘Hate’ the player and ‘hate’ the game.

Some things shouldn’t be part of the market economy – education, health and food.

This is exactly what personal integrity is about. You make the right choices specifically and only because they are right. And they are hard choices because you are forgoing immediate gain.

Time favors integrity, and a lack of integrity is usually punished. Sometimes at the individual level, as you age and see the error of your ways. Sometimes at the group level, as you watch your community suffer.

“You reap what you sow.”

Most of them become distinguished in academia, and only a few get punished if they are too blatant or piss off too many people (see recent Ivies losing their presidents over academic fraud).

> If people can get away with it, they will do it.

This is not universally true and individuals and societies don’t have to be organized this way.

Why are streets in some countries filled with trash when others are clean? My community does not have anyone policing littering – yet our streets, parks and public areas are litter free.

> If people can get away with it, they will do it.

This isn’t true of everyone, but assuming it is increases the likelihood that it will become so. Because if everyone is trying to get away with it, why shouldn’t I? That sort of breakdown in trust is high up on my list of worrying societal failure modes.

I’m not sure there are any rational arguments for Christianity. I say that as a practicing Christian. Either it meets a spiritual need in you, or it’s not very valuable. I imagine that belief in a God who punishes evildoers has kept some people honest throughout history, but the value of that is surely outweighed by the evil done in the name of that God.

I also don’t believe Christian societies are more honest than others. Every religion I know of teaches honesty, as does every non-religious ethical framework I can think of.

People doing bad things, including Christians, is completely in line with the Christian teachings of original sin, the fall and concupiscence. It is human nature to do bad things and it is very difficult to overcome this behavior.

If I read pagan Roman observations about life and people, they strike me as way, way more honest (sometimes brutally honest) than anything that we are used to for the last 1000 years, perhaps with exceptions like Machiavelli and some verbal jokes of the “unprintable” character.

In Christian theory, everyone is a sinner, but in a real Christian society, people try to cover up their particular sins all the time, at least against other laypeople (not the priest), which leads to the opposite of honesty – hypocrisy.

Reminds me of this tweet that calls out the problem of the popular position “science is real”

“Science isn’t real – that’s terrible epistemology. It’s a process or method to generate and verify hypotheses and provisional knowledge, using replicable experiments and measurements. We don’t really know the real – we just have some current non-falsified theories and explanations that fit data decently, till we get better ones. The “science is real” crowd generally haven’t done much science and take it on faith.”

(1) https://x.com/rao_hacker_one/status/1811295722760982939

It’s probably better to say that engineering based on science is usually real. Engineering cares a lot less about falsifying theories and more on what existing theories seem to have general predictive value, and can be used to do stuff, including things where human lives are at risk (tall buildings, fire reduction materials, airplanes). And if there are failures out in the field, they’re inspected, and those results are fed back to update both the practices, and the theories.

Personally I believe we live in an objective universe that can be understood by human brains (possibly using AI augmentation) and that our currently most advanced experimentally verified theories correspond to some actual true aspect of our universe. In that sense, science is real when the current theories match those aspects well enough to make generalizable predictions (general relativity and quantum mechanics).

I think you’ve hit on an important point. Science isn’t about finding absolute truth, but rather about generating testable hypotheses that can be validated through experimentation and observation. This is why it’s so crucial for scientists to follow the scientific method – they need to be willing to revise their theories based on new evidence. Your comparison between science and engineering is a good one. Engineering is often more focused on practical application, whereas science is more about understanding the underlying mechanisms that govern our world.

The intent of science is to find absolute truth. It’s just that the mechanism by which we do so typically involves demonstrating that a finding isn’t absolutely true. And we also lack the epistemological confidence to say that what we’re observing represents the absolute truth, or that the idea of absolute truth is meaningful.

You have agency. Yes – the system provides incentives. However, you are not some pass-through nothingness that just accepts any incentives. You can choose to not accept the incentives. You can leave the system. You’re lucky – it’s not a totalitarian system. There will be another area of life and work where the incentives align with your personal morals.

Once you bend your spine and kneel to bad incentives – you can never walk completely upright again. You may think and convince yourself that you can stay in the system with bad incentives, play the game, but still somehow you the player remain platonically unaffected. This is a delusion, and at some level you know it too.

Who knows? If everyone left the system with bad incentives, it may be that the bad system even collapses. It’s a problem of collective action. The chances are against a collapse; more likely it will continue to go on for some time. So don’t count on collapse. And even if one were to happen in your time, it would be scorched earth post-collapse for some time. Think as an individual – it’s best to leave if you possibly can.

You are clearly deeply disconnected from the actual practice of research.

The best you can really say is that the statistics chops of most researchers are lacking, and that someone researching, say, caterpillars is likely to not really understand the maths behind the tests they’re performing. It’s not an ideal solution by any means, but universities are starting to hire stats and CS department grads to handle that part.

> At the time (20+ years ago) it didn’t occur to me that anybody would intentionally modify images of gels to promote the results they claimed

Fraud, I suspect, is only the tip of the iceberg; worse still is the delusion that what is taught is factually correct. A large portion of mainstream knowledge that we call ‘science’ is incorrect.

While fraudulent claims are relatively easy to detect, claims that are backed up by ignorance/delusion are harder to detect and challenge because often there is collective ignorance.

Quote 1: “Never ascribe to malice that which is adequately explained by incompetence”

Quote 2:”Science is the belief in the ignorance of experts”

Side note: I will not offer to back up my above statements, since these are things that an individual has to learn on their own, through healthy skepticism, intellectual integrity and inquiry.

> A large portion of mainstream knowledge that we call ‘science’ is incorrect.

How do you know that? Can you prove it scientifically?

> claims that are backed up by ignorance/delusion

In that case, they are not “backed up”

> I will not offer to back up my above statements

> an individual has to learn on their own, through … inquiry

May I “inquire” about your reasoning?

owenpalmer, as I stated in my side note, I will not be explaining myself; my sincere apologies for this.

You see, there are some things that cannot be taught, or cannot easily be taught, more so for adults – i.e. healthy skepticism and questioning of statements/facts that come from authority figures (example: science/religion etc). A person will have to make his or her own effort. External influences, especially debates from the heretics, will generally only delay that progress.

Now, if you’re already open to the possibility that mainstream science could be very wrong, I can possibly nudge you in the correct direction. I have explored certain areas of biology (especially nutrition) but not all; each sub-area of biology (or any of the sciences really) is vast, so I have to rely on heretics who are ‘experts’ in their areas of specialization.

There are two approaches here:

Pick one phenomenon in the world that you observe and don’t have an account for, and try to come up with an account, assuming nothing except for your own observation and experimentation, of the causes of the phenomenon. Once you’re done, follow the trail, reading only original or translated original documents, of the history of human descriptions of the phenomenon, do the “science” of “science” by observing the phenomena of observing and describing phenomena.

2. Go in the woods and read Plato and Aristotle and Sophocles for a year.

😀

Not quite; it really depends on what you mean by ‘research’. For most people ‘research’ is just the consensus of the experts. On the other hand, if by research you mean “test it out yourself”, I agree with you. (Not always practical, though, so you have to choose a middle ground.)

Science sent us to the moon. “Do your own research” sent millions to their graves.

“Do your own research” is a movement fraught with grifting and, foundationally, fraud to the core.

“Science” definitely has some fraudsters, but remains the best institution we have in the search for truth.

I’m the furthest thing from a scientist unless you count 3,000 hours of PBS Spacetime, but I love science, and so science/academia fraud feels to me like the worst kind of fraud you can commit. Financial fraud can cause suicides and ruin lives, sure, but I feel like academic fraud sets the whole of humanity back? I also feel that through my life I’ve (maybe wrongly) placed a great deal of respect and trust in scientists, mostly that they understand that their work is of the utmost importance and so the downstream consequences of mucking around are just too grave. Stuff like this seems to bother me more than it rationally should. Are people who commit this type of science fraud just really evil humans? Am I overthinking this? Do scientists go to jail for academic fraud?

Pick up an old engineering book at some point, something from the mid-1800s or early 1900s, and you’ll quickly realize that the trust people put in science isn’t what it should be. The scientific method works over a long period of time, but to blindly trust a peer-reviewed study that just came out, any study, is almost as much faith as religion, especially if you’re not a high-level researcher in the same field and haven’t spent a good amount of time reading the methodology yourself. If you go to the social sciences, the amount of crock that gets published is incredible.

As a quick example, any book about electricity from the early 1900s will include quite serious sections about the positive effects of electromagnetic radiation (or “EM field therapies”), teaching you about different frequencies and modulations for different illnesses and how doctors were applying them. Today these devices are peddled by scammers of the same ilk as the ones that align your chakras with the right stone on your forehead.

Going to need some citations here since the texts that I’m familiar with from that time period are “A Treatise on Electricity and Magnetism” by Maxwell (mid-late 1800s) and “A History of the Theories of Aether and Electricity” by E. T. Whittaker, neither of which mentions anything of the sort. I suspect you are choosing from texts that at the time likely would not have been considered academic or standard.

Your points of memory are not counterpoints.
Those are the ones that lived – and are not indicative of the general quality of science during those times. Obvious survivor bias.

The fact that you can recall those reinforces the point that the value is determined by how long it is useful and remembered, not the fact that it was published.

Indeed, but you are clearly missing the historical context, as these were two highly celebrated and referenced texts of the period by leading scientists. However, it appears that the leading scientific minds (Maxwell and Whittaker among them) did not include these uses in their texts. I do not dispute that science can be wrong (in fact it is almost always ‘wrong’ in the end), nor do I dispute that there could have been published research in those applications. I would argue that these applications were likely fringe at best within the scientific community by the mid-1800s.

There are of course incredible scientists that went down disappointing paths (eg Shockley, Dyson, Pauling) in terms of their research output later on, though one must remember that typically this occurs outside their original field of expertise.

If you read my comment you will see that I am asking for references for the claims the previous author made. I simply provided my own references, which were written at the time and are representative of it, and they do not corroborate the tall tale of the previous author. If you have any references to support their claim I would be interested in perusing them.

And what’s to say that other highly celebrated and highly referenced texts from that time were not based on bad science or were outright frauds? Your memory of them?

Picking the winners as examples is not good sampling.

The originator explicitly said that ‘any engineering book’ would contain these references, thus it would seem that this was at least a widespread belief among physicists and engineers at the time. Do you have any example?

Again, you and the original poster seem to have this understanding that scientists and engineers from the mid-1800s to early 1900s are not to be trusted. I think that this assertion should be backed by considerable evidence, and that burden is of course mostly on you.

I don’t dispute that there were doctors applying electricity and/or magnetism to the body in an “un”-rigorous manner, but is there documentation that suggests that the scientists at the time had come to the conclusion that it worked?

Also notably, Whittaker’s work was a ‘loser’. I chose it specifically for this purpose. I had read parts of it previously because it was a ‘loser’ as he chose to dispute Einstein’s contributions to special relativity.

Because those two texts are the two among literally thousands of scientific publications that have survived the test of time, which is exactly the point being made.

This might seem crazy to hear now, but when Maxwell first published A Dynamical Theory of the Electromagnetic Field in 1865, no one cared; it received very little attention at the time.

It was decades later in 1888 with the work of Hertz that Maxwell’s equations started to gain significance within the scientific community.

It seems convenient that the evidence to corroborate the claim can’t be found, yes?

I think you will also find that the publications of the 1800-1900s are quite well preserved.

Here’s a more recent (1950) example that I think makes parent’s point quite well:

> I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be one of the first to go.

Anyone on this site who doesn’t know what this is from should feel a bit of shame in the current era of hype around machine intelligence, so I’ll leave it as an exercise to the reader if you aren’t already familiar with this paper.

Alan Turing was an incredible computer scientist and mathematician.

Unfortunately he is out of his area of expertise in physics and human biology/neuroscience (not sure where telepathy would sit if it were to be rigorously studied). This is akin to Freeman Dyson on global warming.

That scientists can have strange ideas is something nobody can dispute. That those strange ideas enter into scientific legitimacy is another story entirely.

The point is that I don’t believe Turing’s ideas were widely considered strange at the time; even under conditions of honest action, it’s very easy for educated, smart, and sincere thinkers to take as fact something that, with time, we come to believe is wildly not fact.

Science, even at its most sincere, should always be approached with thoughtful skepticism. The phrase I hear touted often these days, “trust the science”, is in essence not how science should be thought of.

There is a difference between an idea being “not considered strange at the time” and “science”, via scientific publication and subsequent consensus, validating the idea. I have mentioned multiple times in this thread that luminaries can have odd ideas; it’s not something I seek to deny. However, as I continue to reiterate, these ideas are generally:

1. outside their areas of expertise

2. not validated by independent scientific research

I completely agree that science should be approached with thoughtful skepticism, and I agree that ‘trust the science’ might not necessarily be the best semantics to use. However, it is not clear that skepticism from all parties should be given equal weight. Most of the time, people should “trust the science” because they are not equipped to be skeptical.

We use EM radiation for illnesses, and doctors apply it. It’s one of the most important diagnostic and treatment options we have. I think what you’re referring to is invalid therapies (“woo” or snake oil or just plain ignorance/greed), but it’s hard to distinguish those from legitimate therapies at times.

> We use EM radiation for illnesses, and doctors apply it.

Do you have examples of usage as a treatment? I can only think of rTMS (whose effectiveness is contentious).

The most boring example is x-rays. Slightly less boring are the radiation therapies for cancer.

What is maybe the most applicable that is widely accepted is electric therapy for people recovering from ACL surgeries.

Gamma knife? Basically the entire field of radiotherapy?
TMS is magnetic, not EM (the coil generates a magnetic field, which induces localized currents in the body being treated)

>TMS is magnetic, not EM (the coil generates a magnetic field, which induces localized currents in the body

ME? the coil generates an M which induces localized E in the body as shown by localized currents? (which produce some more M, but only just enough)

OP is making a distinction between “EM Radiation” (i.e. “light”) and “Quasistatic fields”.

This is warranted because they are pretty different – light has a frequency distribution, diffracts, etc., and can be focused to propagate energy over distances large compared to its source, whereas quasistatic fields (by definition) have no frequency distribution and die off as 1/r^2 or faster.
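
For concreteness, the standard oscillating point-dipole fields (a textbook result, e.g. in Jackson’s Classical Electrodynamics; the notation below is the usual textbook one, not anything from this thread) contain both kinds of term at once:

    \mathbf{E} = \frac{1}{4\pi\varepsilon_0}\left\{ k^2 (\hat{n}\times\mathbf{p})\times\hat{n}\,\frac{e^{ikr}}{r} + \left[3\hat{n}(\hat{n}\cdot\mathbf{p}) - \mathbf{p}\right]\left(\frac{1}{r^3} - \frac{ik}{r^2}\right)e^{ikr} \right\}

The 1/r term is the radiation (“light”) part that propagates; the 1/r^2 and 1/r^3 terms are the quasistatic near field, which falls off exactly as described above and is the part a TMS coil actually works with.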

I can’t parse what you are saying, but there’s a difference between EM radiation and a magnetic field (and the resulting locally induced currents).
Think in terms of an MRI machine: it puts you in a giant magnet (causing the various nuclear spins to align with the field) and then sends a bunch of EM radiation (radiofrequency). The former is a magnetic field, not EM radiation.

The best example is psychology. The entire field needs to be scrapped and started over; nothing you read in any of those papers can be trusted. It’s just heaping piles of bad research dressed with a thin veil of statistical respectability.

I think the error is putting trust in scientists as people, instead of putting trust in science as a methodology. The methodology is designed to rely on trusting a process, not trusting individuals, to arrive at the truth.

I guess it also reinforces the supreme importance of reproducibility. It seems like no research result should be taken seriously until at least one other scientist or group of scientists is able to reproduce it.

And if the work isn’t sufficiently defined to the point of being reproducible, it should be considered a garbage study.

There is no way to do any kind of science without putting trust in people. Science is not the universe presenting itself as it is; science is the human interpretation of observation. People are the ones who carry out and interpret experiments. There is no methodology you can adopt that will ever change that. “Reproducibility” is important, but it is not a silver bullet. You can never run an experiment exactly the same way twice.

If you have independent measurements you cannot rule out bias from prior results. Look at the error bars here on published values of the electron charge and tell me that methodology or reproducibility shored up the result. https://hsm.stackexchange.com/questions/264/timeline-of-meas…

TFA is about a person who literally faked the observations. Everyone on this sub is trying to shoehorn in their preferred view of “how to fix science” when the problem here has nothing to do with any of it.

The way I sum it up is: science is a method, which is not equivalent to the institution of science, and because that institution is run by humans it will contain and perpetrate all the ills of any human group.

I’d say the public needs to develop some rational impulse; it already has plenty of skepticism, to the point where people no longer trust science the methodology. Instead, they genuinely believe there is some alternative to finding the truth, and now simply believe the same old superstitions and bunk that people held prior to the scientific revolution.

Speaking of Orwell, I don’t think science comes into it. Rather, when people stop believing in democracy, things will degenerate into authoritarianism. It’s generally pretty hard to use science the methodology to implement an authoritarian government as the scientific method by definition will follow the evidence, not the will of a dictator.

However, something that looks like science but isn’t could be used, especially if the public doesn’t understand science and thus can’t spot things that claim to be science but don’t actually follow the scientific method.

Critical thinking = the ability to be skeptical; literally, it is the ability to criticize.

Great critical thinkers become lawyers, post modernist intellectuals, and other parts of the “talking” class of intellectuals. Unfortunately, it’s far easier to talk shit than it is to build things. We’ve massively over-valued critical thinking over constructive thinking.

Most people want to dunk on science. Few people want to submit their own papers to conferences. Many people act like submitting papers is impossible for non-PhDs. We have a lack of constructively oriented thinking.

> Many people act like submitting papers is impossible for non-PhDs.

I agree. But academia reinforces this perception. I feel like PhDs only give serious consideration to the utterances of other PhDs. The rest of the public consists of the unwashed masses, and at best gets the smiling-nod treatment from the PhD.

PS I (a non-PhD) managed to publish a paper during the pandemic (doi: 10.3389/fphar.2022.909945 ). One of the biggest barriers was the item quoted above, and the bogeyman of “epistemic trespass” in general, as they operated in my own psychology. I’ve since become noisy in advocating for the #DeSci movement.

> I’d say the public needs to develop some rational impulse; it already has plenty of skepticism, to the point where people no longer trust science the methodology.

Methodologies are inanimate – I may trust that a methodology is fine, but once Humans become involved I do not trust.

> Instead, they genuinely believe there is some alternative to finding the truth

There are several alternate means, the field of philosophy (that birthed science) has been working on such problems for ages, and has all sorts of utility, just sitting there waiting to be used by Humanity.

> and now simply believe the same old superstitions and bunk that people held prior to the scientific revolution.

Not possible for you to know, unless there are indeed supernatural (beyond current scientific knowledge) forms of perception.

> Rather, when people stop believing in democracy, things will degenerate into authoritarianism.

Once again, not possible for you to know.

> It’s generally pretty hard to use science the methodology to implement an authoritarian government

COVID demonstrated that to be incorrect.

> as the scientific method by definition will follow the evidence, not the will of a dictator.

Incorrect. Something defined to be true necessarily being true only works in metaphysics, such as linguistics.

And again, the scientific method is inanimate.

> However, something that looks like science but isn’t could be used, especially if the public doesn’t understand science and thus can’t spot things that claim to be science but don’t actually follow the scientific method.

On a scale of 1 to 10, how comprehensively and accurately do you believe you understand science?

How does this work for things like COVID vaccines, where waiting for a reproduction study would leave hundreds of thousands dead? Ultimately there needs to be some level of trust in scientific institutions as well. I do think placing higher value on reproducibility studies might help the issue somewhat, but I think there also needs to be a larger culture shift of accountability and a higher purpose than profit.

I believe if we taught philosophy in school to a non-trivial level we wouldn’t have to rely on trust/faith.

I wonder if it’s possible to get people to wonder why the one discipline that has the tools to deal with all of these epistemic, logical, etc issues isn’t taught in school. You’d think it would be something that people would naturally wonder about, but maybe our fundamentalist (and false) focus on science as the one and only source of knowledge has damaged our ability to wonder independently.

Suppose you need to make a decision on a topic that’s contingent on P being true, which someone has already tested. How would you go about making the decision without testing P yourself (because that would mean that you would have to do the same for every decision in your life)?

I think it is fine to put some trust into concrete individual scientists who have proven themselves reliable.

It is not fine to put trust into scientists in general just because they walk around in a lab coat with a PhD label on its front.

You’re far from a scientist, so it’s easy for you to put scientists/academia on a pedestal.

For most of the people who end up in these scandals, this is just the day job that their various choices and random chance led up to. They’re just ordinary humans responding to ordinary incentives in light of whatever consequences and risks they may or may not have considered.

Other careers, like teaching, medicine, and engineering have similar problems.

In my view, prosecuting the bad actors alone will not fix science. Science is by its nature a community, because only a small number of people have the expertise (and university positions) to participate. A healthy scientific discipline and a healthy community are the same thing. Just as “tough on crime” initiatives alone often do not help a problematic community, punishing scientific fraud harshly will not by itself fix the problem. Because the community is small, to catch the bad actors you will either have insiders policing themselves or non-expert outsiders rendering judgments. It’s easy for well-intentioned policing efforts to turn into power struggles.

This is why I think the most effective way is to empower good actors. Ensure open debate, limit the power of individuals, and prevent the over-concentration of power in a small group. These efforts are harder to implement than you might think, because they run against our desire to have scientific superstars and celebrities, but I think they will go a long way towards building a healthy community.

I agree with you, science fraud is terrible. It pollutes and breaks the scientific method. Enormous resources are wasted, not just by the fraudster but also by all the other well-meaning scientists who base their work on that.

In my experience, no, most fraudsters are not evil people; they just follow the incentives, against almost non-existent disincentives. Being a scientist has become just a job; you find all kinds of people there.

As far as I know, no one goes to jail; the worst possible outcome (and a very rare one) is losing the job, more likely just the reputation.

“most fraudsters are not evil people, they just follow the incentives and almost non-existent disincentives”

Maybe I’m too idealistic, but why is following incentives with no regard for secondary consequences not evil?

I have the displeasure of having acquaintances who have done some pretty bad things, of the fraud and bribery persuasion. They did so because they had no regard for the secondary consequences. However, this didn’t mean ‘I understand this horrible secondary consequence is going to happen, but I don’t care’. That would be evil. Instead, it’s more common not to dedicate an iota of time to thinking about possible negative effects at all.

You’ll see this all over risky startups. What starts as hopeful optimism only becomes fraud over time, when the consequences of not committing fraud also seem horrible. It’s easy to follow the road until all your choices are horrible in different ways, and then people pick the one that is better for those around them, yet worse for everyone else.

IMO “evil” is a misconception. People have different beliefs and psychological needs, and placing them in certain incentive structures has the outcomes that we see. You can call certain behaviors “evil”, but that doesn’t explain anything about why the behaviors occur.

Nope. “Evil” still provides no explanation and no understanding of why and how these things happen. It’s the same as believing in miracles created by a god.

The context here is from the root comment: “Are people who commit this type of science fraud just really evil humans?”. “Just really evil” implies that there is no other explanation, and that the fraud is committed as a function of them being “really evil”.

I don’t actually know what people mean when they label someone as “evil”, other than “is doing/saying/thinking stuff I find very reprehensible”. Which doesn’t make sense when you insert it into the above statement: “Are people who commit this type of science fraud just humans who do stuff I find really reprehensible?” Well, I guess it sounds like they are.

It seems like people want to assign a character trait when they say “person X is evil”, but I don’t believe such a generic character trait exists (or know what exactly it would mean if it did). What’s worse, it obfuscates and prevents understanding of the actual character traits and circumstances that lead to the respective behavior.

What is a general definition of “evil” that one could derive this from? And how does it relate to the actual reasons why someone would inflict physical pain? Are soldiers in a war evil when they happen to inflict physical pain outside of self-defense? Or is that another “common-sense” exception?

The concept is emotionally laden and ill-defined, and has little relation to why the designated behaviors actually happen. It’s an incoherent concept that has no explanatory power.

Exactly. In fact, all things in the universe are subjective except exactly one thing, which is that all other things are subjective. This is epistemological monism, and it’s the only coherent view.

Socrates got it. “I know that I know nothing” (else)

> Stuff like this seems to bother me more than it rationally should.

It’s bothering you a rational amount, actually. These people have done serious damage to lots of lives and humanity in general. Society as a whole has at least as much interest in punishing them as it does for financial fraudsters. They should burn.

As a scientist, I agree, although for not quite the reason you gave. Scientists are given tremendous freedom and resources by society (public dollars, but also private dollars like at my industry research lab). I think scientists have a corresponding higher duty for honesty.

Jobs at top institutions are worth much more than their nominal salary, as evidenced by how much those people could be making in the private sector. (They are compensated mostly in freedom and intellectual stimulation.) Unambiguously faking data, which is the sort of thing a bad actor might do to get a top job, should be considered at least as bad a moral transgression as stealing hundreds of thousands or perhaps a few million dollars.

(What is the downside? I have never once heard a researcher express feeling threatened or wary of being falsely/unjustly accused of fraud.)

There was a period of time when science was advanced by the aristocrats who were self funded and self motivated.

Once it became a distinguished profession the incentives changed.

“When a measure becomes a target, it ceases to be a good measure”

> There was a period of time when science was advanced by the aristocrats who were self funded and self motivated.

From a distance the practice of science in early modern and Enlightenment times might look like the disinterested pursuit of knowledge for its own sake. If you read the detailed history of the times you’ll see that the reality was much more messy.

Today we only remember the great thinkers of these times, and tend to see a linear accumulation of knowledge. If you look at the history you realise that there was a vast and confusing babble; it was very hard at the time to distinguish the valid science from the superstition, the blind regurgitation of classical authority, the soothsayers and, yes, the fraudsters.

For example, Kepler considered his work on the Music of the Spheres (google it) to be more important than, and the ultimate goal of, his research on the mechanics of planetary motion. Newton dabbled in alchemy, and his dispute with Leibniz was very, very bitchy, with some dubious jostling for priority. And there was no end of dubious research and outright fraud going on at the time. So no, it was not a golden era of disinterested research.

See for example the Wikipedia articles on phlogiston, the Music of the Spheres, the long and hard-fought battle over epicycles, etc.

Not the OP, but I remember reading about many twists and turns on the road to various inventions described in Matt Ridley’s “How Innovation Works”. I personally like “Happy Accidents: Serendipity in Major Medical Breakthroughs in the Twentieth Century” by Morton Meyers.

It seems like this could ultimately fall under the category of financial fraud, since the allegations are that he may have favorably misrepresented the results of drug trials where he was credited as an inventor of the drug that’s now worth hundreds of millions of dollars.

Generally, the fields that have a Nobel in them attract the glory hounds and therefore the fraudsters. The ones that don’t, like geology or archeology for example, don’t get the glory hounds.

Anytime you see champagne bottles up on a professor’s top shelf with little tags for Nature publications (or something like that), then you know they are a glory hound.

When you see beer bottles in the trash, then you know they’re in it for more than themselves.

As a collective endeavor to seek out higher truth, maybe some amount of fraud is necessary to train the immune system of the collective body, so to speak, so that it’s more resilient in the long-term. But too much fraud, I agree, could tip into mistrust of the entire system. My fear is that AI further exacerbates this problem, and only AI itself can handle wading through the resulting volume of junk science output.

This is pretty funny. I usually hear this kind of language when a religious person is so devastated when their priest or pastor does something wrong that it causes them to leave their religion altogether. Are you going to do the same thing for scientism?

It is the same flavor of fraud as financial fraud. It is about personal gain, and avoiding loss.

This kind of fraud happens because scientists are rewarded greatly for coming up with new, publishable, interesting results. They are punished severely for failing to do that.

You could be the department’s best professor in terms of teaching, but if you aren’t publishing, your job is at risk at many universities.

Scientists in Academia are incentivized to publish papers. If they can take shortcuts, and get away with it, they will. That’s the whole problem, that’s human nature.

This is why you don’t see nearly as many industry scientists coming out with fraudulent papers. If Shell’s scientists publish a paper, they aren’t rewarded for that; if they come up with some efficient new way to refine oil, they are rewarded, and they also might publish a paper if they feel like it.

> If Shell’s scientists publish a paper

A lot of companies reward employees for publications. Mine certainly does. Also an oil company may not be such a great example since they directly and covertly rewarded scientists for publishing papers undermining climate change research.

Ok. Bad example using Shell, but the point is that Industry scientists do not have publication in journals as a primary incentive.

A scientist can work in industry research and NEVER publish, and still have a career where they make money, and don’t worry about losing their job.

Scientific fraud can also compound really badly because people will try to replicate it, and the easiest results to fake are usually the most expensive…

Can you go to jail for knowingly defrauding another entity out of money (such as grants)? Yes. Absolutely.

Are you going to go to jail for fudging some numbers in your paper? Not likely.

It’s complicated. Historically, scientific fraud could be construed as ‘good-intentioned’: typically a researcher in a cutting-edge field might think they understood how a system worked and, wanting to be first to publish for reasons of career advancement, would cook up data so they could get their paper into print before anyone else.

Indeed, I believe many academic careers were kicked off in this manner. Where it all goes wrong is when other, more diligent researchers fail to reproduce said fraudulent research – this is what brought down famous fraudster Jan Hendrik Schön in the field of plastic-based organic electronics, which involved something like 9 papers in Science and Nature. There are good books and documentaries on that one. This will only get worse with AI data generation, as most of those frauds were detected through banal duplicated data, obvious cut-and-paste jobs, etc.

However, when you add a big financial driver, things really go off the rails. A new pharmaceutical brings investors sniffing around for a big payout, and the prospect of making the patentable ‘discovery’ look better than it is becomes a strong incentive to commit egregious fraud. Bug-eyed greed makes people do foolish things.

Evil is a much simpler explanation than recognizing that if you were in the same position with the same incentives, you would do the same thing. It’s not just one event, it’s a whole career of normalizing deviation from your values. Maybe you think you’d have morals that would have stopped you, maybe those same morals would have ensured you were never in a position to PI research like that.

I also watched almost all episodes of PBS Spacetime. Some of them multiple times. I’m so happy that Spacetime exists and also that Matt was recruited as a host (in place of Gabe). Highly recommended channel, superb content!

This sort of behavior is only going to worsen in the coming decades as academics become more desperate. It’s a prisoner’s dilemma: if everyone is exaggerating their results you have to as well or you will be fired. It’s even more dire for the thousands of visa students.

The situation is similar to the “Market for lemons” in cars: if the market is polluted with lemons (fake papers), you are disincentivized to publish a plum (real results), since no one can tell it’s not faked. You are instead incentivized to take a plum straight to industry and not disseminate it at all. Pharma companies are already known to closely guard their most promising data/results.

Similar to the lemon market in cars, I think the only solution is government regulation. In fact, it would be a lot easier than passing lemon laws since most labs already get their funding from the government! Prior retractions should have significant negative impact on grant scores. This would not only incentivize labs, but would also incentivize institutions to hire clean scientists since they have higher grant earning potential.

My recommendation is for journals to place at least as much importance on publishing replications as on the original studies.

Studies that have not been replicated should be clearly marked as preliminary results when published. Then other scientists can pick them up and try to replicate them.

And institutions need to give nearly equal weight to replications as to original research when deciding on promotions. It should be considered every researcher’s responsibility to contribute to the overall field.

We can solve this at the grant level. Stipulate that for every new paper a group publishes from a grant, that group must also publish a replication of an existing finding. Publication would happen in pairs, so that every novel thing would be matched with a replication.

Replications could be matched with grants: if you receive a $100,000 grant, you’d get the $100,000 you need, plus another $100,000 which you could use to publish a replication of a previous $100,000 grant. Researchers can choose which findings they replicate, but with restrictions, e.g. you can’t just choose your own group’s previous work.

I think if we did this, researchers would naturally be incentivized to publish experiments that are easier to replicate and of course fraud like this would be caught eventually.

I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.

Replication is over-emphasised. Attempts to organise mass replications have struggled with basic problems like papers making numerous claims (which one do you replicate?), the question of whether you try to replicate the original methodology exactly or whether you try to answer the same question as the original paper (matters in cases where the methodology was bad), many papers making obvious low value findings (e.g. poor children do worse at school) and so on.

But the biggest problem is actually that large swathes of ‘scientists’ don’t do experiments at all. You can’t even replicate such papers because they exist purely in the realm of the theoretical. The theory often isn’t even properly written down! They will tell you that the paper is just a summary of the real model, which is (at best) found in a giant pile of C or R on some github repo that contains a single commit. Try to replicate their model from the paper, there isn’t enough detail to do so. Try to replicate from the code, all you’re doing is pointlessly rewriting code that already exists (proves nothing). Try to re-derive their methodology from the original question and if you can’t, they’ll just reject your paper as illegitimate criticism and say it wasn’t a real replication.

Having reviewed quite a lot of scientific papers in the past six years or so, the ones that were really problematic couldn’t have been fixed with incentivized replication.

So then, how on earth does this stuff even get published? What exactly is it that we’re all doing here?

If a finding either cannot be communicated enough for someone else to replicate it, or cannot be replicated because the method is shoddy, can we even call that science?

At some level I know that what I’m proposing isn’t realistic because the majority of science is sloppy. P-hacking, lack of detail, bad writing, bad methods, code that doesn’t compile, fraud. But maybe if we tried some version of this, it would cause a course correction. Reviewers, knowing that someone actually would attempt to replicate a paper at some point down the road, would be far more critical of ambiguity and lack of detail.

Papers that are not fit to be replicated in the future, whose claims cannot be tested independently, are actually not science at all. They are worth less than nothing because they take up air in the room, choking out actual progress.

> I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.

Undoubtedly this is true. The problem, like with advertising, is identifying _which_ half to cut.

There would still be incentives for collusion (I “reproduce” your research, you “reproduce” mine), and researchers pretending to reproduce papers but actually not bothering (especially if they believe that the original research was done properly).

Ultimately, I’m not sure how to incentivize reproduction of research: it’s very easy to fake a successful reproduction (you already know the results, and the original researcher will not challenge you), so you don’t want to reward that too much. Whereas incentivizing failed reproductions might lead some scientists to sabotage their own reproduction efforts in ways that are subtle enough to have plausible deniability.

Proceeding by pairs is probably not enough. You probably need 5-6 replications per paper to make sure that at least one attempt is honest and competent, and make the others afraid to do the wrong thing and stand out.

You could randomize replications a bit, take away the choice. Or make it so that if you replicated one group’s result, you can’t replicate them again next time. The key is a bit of distance, a bit of neutrality. Enough jitter to break up cliques.

I don’t work in academia but in my experience professors are basically all intellectually arrogant and ego-driven, and would relish having time and space to beat each other at the brain game. A failed replication is their chance to be “the smarter guy in the room” and crack open some long-held belief. A successful replication would probably happen most of the time and be far more boring.

I could imagine, if such a thing were mandated and in place for a while, one could build her career on replications, as a prosecutor or defense. She would publish new research solely to convince her colleagues that she is sharp enough to play prosecutor or defense.

Anything has got to be better than what we have now, where apparently you can cheat and defraud your way through an entire decades-spanning career.

It is much much harder to sustain a conspiracy among many distributed people over time, than it is to fake your own research results.

Making fraud much less convenient will greatly reduce the amount of it.

I think it would be better to have separate grants for replication studies. If something becomes a mandatory administrative burden, people will see it as low-prestige work and try to avoid it. And the kind of people who are good at novel research are often also good at ignoring duties they don’t like, or completing them with minimal effort if forced to.

But if there is separate funding for replication studies, it will become something people compete for. Some people will specialize on replicating others’ work, and universities will pay attention, as they care about grant overheads.

> But if there is separate funding for replication studies, it will become something people compete for.

It would need to be very good funding on par with what’s offered for “novel research”.

In addition, we would need increased prestige (e.g. awards, citations) for replicated studies as well for this to be effective. For many academics funding is merely a means to that end.

> I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.

It might actually improve the pace of science, if the half eliminated were not replicable and the remaining half were written by researchers knowing that they would likely face a replication attempt.

It is a lot easier to just falsely confirm the experiment, since the data is already there and the publisher of the paper is not going to push back if you confirm it.

Why go through all the work of actually proving/disproving the experiment when you can just tweak the numbers from the original experiment, say you actually reproduced it, and move on?

This stuff happens in Computer Science too. Back around 2018 or so I was working on a problem that required graph matching (a relaxed/fuzzy version of the graph isomorphism problem) and was trying algorithms from many different papers.

Many of the algorithms I tried to implement didn’t work at all, despite considerable effort to get them to behave. In one particularly egregious (and highly cited) example, the algorithm in the paper differed from the provided code on GitHub. I emailed the authors trying to figure out what was going wrong, and they tried to get funding from me for support.

My manager wanted me to write a literature review paper which skewered all of these bad papers, but I refused since I thought it would hurt my career. Ironically, the algorithm that ended up working the best was from one of the more unknown papers, with few citations.

You should be able to build an entire career out of replications: hired at the best universities, published in the top journals, social prestige and respect. To the point where every novel study is replicated and published at least once. Until we get to that point, there will be far fewer replications than needed for a healthy scientific system.

> social prestige and respect

This one is the showstopper. No matter what you do with rules and regulations, if people aren’t impressed by it at a watercooler conversation, or when chatting at a cocktail party at a conference, or when showing a politician around in your lab then nothing else matters.

How prestigious something is is not a lever you control.

There absolutely exist skilled scientists that would happily make a living unglamorously replicating studies, if the money was there.

Prestige is a nice motivator, but making a living at all is always the baseline, and is often sufficient.

Similarly, ambitious and difficult experiments that don’t pan out should also be richly rewarded. You just did all of science the service of clearly marking that tempting path with a big “don’t bother” sign, thus saving resources and pointing the ship a little closer to the direction of truth.

The problem with putting the onus on the journals is there is no incentive for them to reward replications. Journals don’t make money on replicated results. Customers don’t buy the replication paper they just read the abstract to see if it worked or not.

I do like the idea of institutions giving tenure to people with results that have stood the test of time, but again, there is no incentive to do so. Institutions want superstar faculty, they care less about whether the results are true.

The only real incentive that I think can be targeted is still grant money, but I would love to be proved wrong.

If all that’s true, we should just shut down all the science institutions across the board. They’re worth nothing if they are not vigorously pursuing the truth about the world.

Replications are not very scientifically useful. If there were flaws in the design of the original experiment, replicating the experiment will also replicate the flaws.

What we should aim for is confirmation: a different experiment that tests the underlying phenomenon that was the subject of the first paper.

Replications don’t frequently get published but they do get attempted, because any decent researcher is going to replicate a result they rely on to build the next step. Unfortunately, you can get stuck in the mud as I did and be unable to replicate the prior findings. Is it technique or were the original results in error? We’ll never know.

Building more results without replications is what caused the psychology crisis. Apparently every lab accepted the p<0.05 results or stated correlations of prior studies and just ran more studies until they got their own that was publishable. Since everyone “knew” that the prior result was true, like priming or whatever, they could conclude anything they wanted, because ex falso quodlibet.
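
A toy simulation shows how cheap those “confirmations” are (a minimal sketch assuming numpy and scipy are installed; the lab count and sample sizes are made up):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    labs, n = 200, 30                    # hypothetical: 200 labs, 30 subjects per group
    significant = 0
    for _ in range(labs):
        control = rng.normal(0, 1, n)    # no real effect: both groups are
        treated = rng.normal(0, 1, n)    # drawn from the same distribution
        _, p = stats.ttest_ind(control, treated)
        significant += p < 0.05

    print(f"{significant} of {labs} labs 'find' the effect at p < 0.05")
    # expect roughly 10 (5% of 200); if only those get published,
    # the literature looks like consistent confirmation of a null effect
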

I’d be careful about that. Faking replications is even easier than faking research, so if you place a lot of importance on them, expect the rate of fraud in replication studies to explode.

This is a very difficult problem to solve.

Well of course, but I don’t think that would necessarily help much. The point is that you don’t really need to do anything: you know what the results should be, and you know you are unlikely to get pushback, so there’s only an incentive to do the strict minimum to create plausibility that you ran the experiments.

Basically, I think there is a sizable risk that a large number of replications would be fraudulent or half-assed, which dilutes their value. Paradoxically, the more this policy suppresses fraud or mistakes in original research, the less people will perform replication in good faith.

I could be wrong, but people are endlessly creative at subverting systems when the stakes are high, so I’m wary of simple solutions. To be fair, it’s probably better than the current system, just not as much as we’d like.

The incentive structures in science have been relatively stable since I entered the field in 1980 (neuroscience, developmental biology, genetics). The quality and quantity of science are extraordinary, but peer review is worse than bad. There are almost no incentives to review the work of your colleagues properly: it does not pay the bills, and you can make enemies easily.

But there was no golden era of science to look back on. It has always been a wonderful, productive mess—much like the rest of life. At least it moves forward—and now exceedingly rapidly.

Almost unbelievably, there are far worse crimes than fraud that we completely ignore.

There are crimes associated with social convention in science of the type discussed by Karl Herrup with respect to 20 years of misguided focus on APP and abeta fragments in Alzheimer’s disease.

This could be called the “misdemeanors of scientific social inertia”. Or the “old boys network”.

There is also an invisible but insidious crime of data evaporation. Almost no funders will fund data preservation. Even genomics struggles, though it is way ahead of the rest of biomedical research. Neuroscience is pathetic in this regard (and I chaired the Society for Neuroscience’s Neuroinformatics Committee).

I have a talk on this socio-political crime of data evaporation.

I wonder if there are any studies on whether fraud increased after the Bayh-Dole Act. There’s certainly fraud for prestige, that’s pretty expected. But mixing in financial benefits increases the reward and brings administrators into play.

It could also have a chilling effect on a lot of breakthrough research. If people are no longer willing to put out what they mostly think is right, it might set back progress by decades as well.

BS governmental desperation to show any “result” (even if it is fake) is what brought us here, as scientists have to show more fake results to get more grants.

Removing the government from science could help, not the other way around.

Good luck with that sentiment here.

People just went through the last five years and will go to their graves defending what they saw first hand. To admit that maybe those moves and omissions weren’t helpful would be to admit their ideology was wrong. And that can not be.

If I have learned anything over 40 years, it is that the number of people who actually live in a way consistent with the hypothesis-testing, data-collection, and evidence-evaluation framework required to have scientific confidence in future actions or even claims is effectively zero.

That includes people who consider themselves professional scientists, PhDs, authors, leaders, etc.

The only people I know who consistently live “scientifically” are people considered “neurodivergent”, along the autism-ADHD-ODD spectrum, which forces them to create the kinds of mechanisms that are actually scientific, as required by their conditions.

Nevertheless, we should expect better from people, and on average we need to do better at aligning how they think with science, which, when robustly demonstrated, shows with staggering predictability how the world works, compared to all other methods of understanding the universe.

The fact that the people carrying the torch of science don’t live up to the standard is expected – hence peer review.

This is an indictment of the incentives, and the pace at which bad science is revealed (as in this case) is always too slow, but science is the one place where eventually you will either be exposed as a fraud or never be followed in the first place.

There’s no other philosophy that has a higher bar of having to conform with all versions of reality forever.

The reason many people hate children is that children are not satisfied with the level of epistemology most people can provide them, and have no compunction about saying “that answer is unsatisfactory”.

Hence institutional pedagogy is so often rote and has nothing to do with understanding – even though the science of learning says that every human craves understanding (Montessori, Piaget, etc.).

In fact, the shortest way to break the majority of people’s brains is to ask them one of the following questions:

– Can you explain the reasoning behind your behavior?

– How would you test your hypothesis?

– What led you to the conclusion you just stated?

– Can you clarify the assumptions embedded in your claim?

– Have you evaluated the alternatives to your position?

I shared this article with an MD/PhD friend who has done research at two of the three most famous science universities in America … and she said “this (not this guy, this phenomenon) is why I left science.”

Maybe it’s like elite running – everyone who stays competitive above a certain level is cheating, and if you want to enjoy watching the sport, you just learn to look the other way. Except that the stakes for humanity are much higher in science than in sport.

The Retraction Watch website does a good job of reporting on various cases of retractions and scientific misconduct (1).

Like many others, I hope that a greater focus on reproducibility in academic journals and conferences will help reduce the spread of scientific misconduct and inaccuracy.

(1): https://retractionwatch.com/

> It seems like a strange thing to take someone with a long and respected career and subject them to what would essentially be a Western blot and photomicrograph audit before offering them a big position.

This is absolutely something that we should routinely be doing, though.

It’s pretty similar to the level of distrust in the software engineering job interview process.

Pick your poison, to some extent. Better would be not to have to do it after the fact, but to vet better at every intermediate step; that’s hard, though. Just a very difficult people problem.

Yeah, it sounds a little bit absurd to me. It’s just basic due diligence. You don’t skip a background check on a potential employee just because their resume looks good and they got a reference. In those cases you still go, “Annoying that we have to wait, because we want this person on board NOW, and it’s a fairly shallow investigation that 99% of the time doesn’t reveal anything even if there is something, but it’s the standard procedure.”

The amazing part about this to me is that the only reason the authors were caught is image manipulation. The fraud in numbers and text? Not so easy to uncover.

Prediction: papers stop using pictures entirely

Many journals now require all versions of a gel image that is used in a figure. So, you’d have to fake the full image that is cropped down to the lanes used in the figure. I think there aren’t as many of those raw images around to train AI on… yet.

I predict it will get even worse than that. In the next couple of decades I expect any document or work that has a substantial reward associated with it (financial, career advancement, a grade for critical coursework in one’s major) or a penalty attached (such as indictment or conviction) to be backed by a time-stamped stack of developing documentation: drafts and revisions, with the timestamps validated against a trusted custodial clock and a seed random string marking the start of work, recorded in some immutable public form.

Accompanying the finished document will be a hash of all of these works along with their associated timestamps; the originals can be verified if necessary to prove a custodial chain of development over a plausible period of time and across multiple iterations of the work. It would be a kind of signed time-lapse slideshow of its genesis from blank page to finished product, as if it had a mandatory, global “track changes” flag enabled from the very beginning, by which the entire process can be shown to be an original human-collaborated work and not an insta-generated AI fiction.

I actually thought that digital timestamps would have been a great use-case for blockchains. They are publicly available and auditable. If you’re working from hashes, you don’t necessarily need to make the raw data public, just the hash. It is a use-case that has intrinsic value to the data generator and the future auditor (so you could charge something for it). I know there was some work done on this, but I think it lost momentum due to the focus on generating crypto as a value-storage medium.

The gold bugs really set back that entire field: the quasi-religious pursuit of “trustless” designs made everything more expensive, but so many problems are far more tractable with trusted third parties, both for cost and for reduced attack potential, because institutional/professional reputations are harder to build than n% consensus on a cryptocurrency, and they don’t have the built-in bug-bounty problem.

For example, imagine if university libraries ran storage systems based on Merkle trees with PKI signatures and researchers used those for their papers, code, data inventory (maybe not petabytes of data, but the hashes of that data), etc. If there were allegations of misconduct you’d be able to see the whole history, establishing when things were changed and by whom, and someone couldn’t fudge the data without multiple compromised/complicit people in a completely different department (a senior figure can pressure a grad student in their field, but they have far less leverage over staff at the library). And since you’re not sharing a database with the entire world, you have a much easier time scaling with periodic cross-checks (e.g. MIT and Caltech could cross-sign each other’s indexes periodically, so you could have confidence that nobody had altered the inventory without storing the actual collection).
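
A minimal sketch of what such a deposit could look like (hashlib is a real standard-library module, but the deposit format and the duplicate-last-node padding rule are just illustrative choices, not any specific library system):

    import hashlib
    import json
    import time

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def leaf(snapshot_hash: bytes, timestamp: float) -> bytes:
        # Bind each draft snapshot to the time it was deposited.
        return h(json.dumps({"doc": snapshot_hash.hex(), "ts": timestamp}).encode())

    def merkle_root(leaves: list[bytes]) -> bytes:
        # Hash pairwise up the tree, duplicating the last node on odd levels.
        level = list(leaves)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    drafts = [b"outline", b"first draft", b"figures added", b"final text"]
    root = merkle_root([leaf(h(d), time.time()) for d in drafts])
    print("root the library would sign:", root.hex())

The library would sign only the root with its institutional key; an auditor could later verify any single draft against it with a log-sized proof, and cross-signing between institutions just means periodically signing each other’s roots.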

Eventually AI will also be able to reliably audit papers and report on fraud.

There may be newer AI methods of fraud, but they will only buy you time. As both progress, a fraud generated with today’s technology and committed to the record will almost certainly be detected by a later technology.

I would guess that we’re within 10 years of being able to automatically audit the majority of papers currently published. That thought must give the authors of fraudulent papers the heebie-jeebies.

The problem is that detecting fraud is fundamentally harder than generating plausible fraud. This is because ultimately a very good fraud producer can simply produce output that is identically distributed to non-fraud.

For the same reason, tools that try to detect AI-generated text are ultimately going to lose the arms race.
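
That asymmetry is easy to see in a toy experiment (a minimal sketch assuming numpy and scikit-learn are installed; the data and the choice of classifier are arbitrary): if the “fraudulent” samples are drawn from the same distribution as the genuine ones, no detector can beat a coin flip.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(5000, 10))  # "genuine" data
    fake = rng.normal(0.0, 1.0, size=(5000, 10))  # perfectly mimicked data
    X = np.vstack([real, fake])
    y = np.array([0] * 5000 + [1] * 5000)

    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"detector AUC ~ {auc:.2f}")  # hovers around 0.5: indistinguishable

This bound applies to any detector, not just logistic regression: once the fake distribution matches the real one, the best achievable AUC is 0.5.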

The YC company that wanted to sell fake survey results (yes, they really had a Launch HN with that idea) will surely be the first to sell fake science results next. YC disrupting the sciences.

I wonder if there’s evidence of fraud _increasing_ or if the detection methods are just improving.

In my last workplace, self-evaluation (and, therefore, self-promotion) was mandatory on a semi-annual cycle and heavily tied to compensation. It’s not surprising that it became a breeding ground for fraud. Absent a strong moral conviction (which I would argue is in decline), these sorts of systems will likely always be targets for fraudulent behavior.

There may be a dark twist to this story.

The expose article writes:

> “UCSD neuroscientist Edward Rockenstein, who worked under Masliah for years, co-authored 91 papers that contain questioned images, including 11 as first author. He died in 2022 at age 57.”

They say nothing else about this. But looking at Rockenstein’s obituary, indications are that it was suicide. (It was apparently sudden, at quite a young age, and there are many commenters on his memorial page “hoping that his soul finds peace,” and expressing similar sentiments.)

Is there no liability for the author? There are billions of dollars wasted in drug trials and research that can be tied to this fraud. Surely they can face some legal issues due to this?

Not only are there billions of dollars wasted, there are many, many lives wasted. If the billions had gone in a direction that was actually promising, maybe there would be treatments that would have saved millions of person-years of quality lifetime. This person is basically a mass-murderer.

Like all things in life that have risks of fraud, negligence or potential failure, insurance could be the answer.

Want to publish a peer-reviewed paper? Well then, your institution or you should take out a bond or insurance policy that guarantees your work is accurate. The insurance amount would fluctuate based on how big an impact the study could have. Is it a drug that will be consumed by millions? Big insurance policy. Is it a behavioral study without much risk? Small insurance policy.

Is a person at an institution caught committing fraud? Well, now all papers from that institution carry higher premiums.

Did you sign off on peer review for a paper that turned out to be fraudulent? Well, now your premiums are going up too.

Insurance costs too high to publish? Well then, keep doing research until the underwriters are satisfied that your work isn’t fraudulent and adjust the premiums down.

It adds a direct near-term economic incentive to publish honestly and punishes those that abuse the system.

In other words, you are suggesting more stringent peer review conducted by insurance companies. And because insurance companies are too small to have sufficient in-house expertise on every topic, the reviews would usually be done by external consultants. The costs might range from $10k for simple papers to hundreds of thousands for large, complex papers.

The insurance model does not really work when the cost of evaluating the risks far outweighs the expected risks.

That is like saying my insurance company has to follow me around for a week while I drive before they can underwrite a policy. If there is money to be made, and money to be lost, the actuaries will find a way.

The problem could be that it becomes impossible to publish certain kinds of papers that are very well supported and valuable, because no institution can afford the insurance.

You are not the first person in the world to own a home or drive a car. Insurance companies can offer you cost-effective insurance, because you are doing effectively the same things as many other people.

Science is largely about doing novel things and often being the first person in the world to try something. In order to understand the risks, you have to understand the actual research, as well as the personalities and personal lives of the people doing it.

Then there is the question of perverse incentives. Research fraud is not a random event but an intentional action by the people who take the insurance. If they manage to convince you to underwrite their research, they know that the consequences of getting caught will be less severe than without the insurance, making fraud more likely. Normally intentional fraud would not be covered by the policy, but here covering it would be the explicit purpose of the insurance.

Insurance companies insure one-off events all the time. You can literally insure anything; it’s just a matter of whether the premium outweighs what you perceive as the risk. “Uninsurable” just means the price is too high to be considered practical.

The research might be novel, but the procedures for research and publication are very similar. So insurance companies would just make sure that you followed a protocol which minimizes their risk.

Perverse incentives are taken into account by insurers. Insuring someone is always an adversarial back-and-forth to determine whether they are being truthful, which is why life insurance companies require a physical. They don’t just have you self-report and then accept it as fact.

Industry professionals like lawyers and doctors carry malpractice insurance. A lawyer can still commit fraud. Insurance isn’t a black and white thing. It is a sliding scale that ties risk to a monetary value.

It’s not rocket science. Just actuarial science. 😉

> The research might be novel, but the procedures for research and publication are very similar.

This is wrong.

Some time ago, I completed the checklists for publishing a paper in a somewhat prestigious multidisciplinary journal. Large parts of the lists were about complying with various best practices and formal requirements in different fields. I often didn’t even understand the questions outside my field. And the questions nominally within my field were often category errors. They assumed a mode of doing research that was far from universal. Overall, the process was more frustrating than (let’s say) applying for a US visa.

I think you are desperately trying to force this into something black and white rather than thinking critically: there is a spectrum of research, some of which is similar to other work and can easily have standard insuring procedures, and some of which is more complex and requires more diligence from the insurance company. Just like nearly every single thing an insurance company does.

Yes, there is novel research that has never been done before. So what? That doesn’t determine whether you can get insurance. That’s a failed argument from the beginning.

Anyway, you don’t seem to be having this discussion in earnest; you’re disregarding large pieces of the above arguments and shoehorning in the idea that if unique research is being done, it is impossible to assess the risk of anything. Kinda silly.

The cases that would require more diligence from the insurance company are the kind of research that should be encouraged. Breakthroughs are more likely to happen when people take risks and try something fundamentally new, instead of adhering to the established forms. Your insurance model would discourage such research by making it more expensive.

Additionally, even if we assume that the insurance model is a good idea, it should be tied to individual researchers, not universities. The entire model of university research is based on loose networks of independent professionals nominally employed by various organizations. Universities don’t do research, they don’t own or control the projects, and they don’t have the expertise to evaluate research. They are just teaching / administrative organizations that provide services in exchange for grant overheads.

> that it may become impossible to publish certain kinds of papers that are very well supported and valuable because no institution can afford the insurance.

What type of research would that be? Just publish it online without insurance and everyone will treat it as unverified and uninsured, separate from research that is.

Once the risk of the publishing research has gone down (i.e. reputable peers approve, or the findings were replicated), the cost of the insurance goes down also.

If something is that costly to insure, there would be a reason, and thus the system works.

If it is possible to advance your career by publishing uninsured research then we’ve just renamed the problem, although I do like the idea of adding this structure. Eventually there could be so much of it that it would become an accepted norm that your research isn’t actually published in a journal until five years after you informally publish it. Other scientists in the field have to be abreast of the latest findings, so now these informal publications are the true journals.

I see your point, the success of this would have to align with a change in the broader academia to only cite research from insured researchers.

The “organic” way this would happen is if there were a shift such that journals with insured research became far more valuable than uninsured research. Or perhaps if companies started suing researchers for negligence and fraud to recover costs when they relied on research that was later proved fraudulent.

In the literary world, anyone can publish a book, but a book from O’Reilly carries with it a different level of authority and diligence than a self-published book or blog post.

So the shift would have to be that your career can’t advance without publishing a bonded and insured paper.

But that is not how research works in Academia. They have to follow the bleeding edge of the field, or they may be doing work of their own that is already irrelevant. They will not wait until a consortium of insurance companies and underwriters have done the actuarial analysis and come up with an underwriting product that the institution has funded (and what is the institution’s business model for recovering this cost in a field of pure research, anyway?)

> you are suggesting more stringent peer review conducted by insurance companies

Absolutely not. Underwriters are smart. They use other variables and methods for determining risk. They don’t need to directly recreate and peer review the research themselves.

I was thinking about it: If I come across someone seriously injured, try to help them, and accidentally hurt them, I’m protected (in many places) by Good Samaritan laws.

But if a health care professional does the same thing, and does something negligent, then they are usually liable. They are professionals and are held to a different standard. Similarly, that’s why lawyers keep writing: this is not legal advice and you are not my client.

Perhaps a professional in science should have higher standards. Obviously they shouldn’t be sued for being wrong – that would destroy science, disregard the scientific method’s means to address inaccuracy, and go against science’s nature as the means to develop new knowledge. But intentionally deceiving people perhaps should be illegal and/or create liability: When you publish something, people depend on its fundamental honesty and will act on it.

The US has the Office for Research Integrity which can prosecute scientific fraud cases, but it only does a handful of cases per year.

To put the scale of this problem in perspective, the ORI was set up in the early 1990s after Congress became concerned at widespread reports of scientific fraud. It clearly didn’t work, but hangs around regardless.

It’s ultimately a culture problem. Until academics are held to the same standards as ordinary corporate employees, you’re going to get judges and juries who let them off scot-free.

Here’s a deterrent:

1) revoke all of their academic accreditations and degrees

2) put them on a public “do not publish” list permanently banning them from being named on any paper in a journal

The line between outright fraud, bad methods correctly implemented, messy data, and implementation bugs is fuzzy. Trying to criminalize anything that isn’t very, very clearly outright fraud quickly turns into a case of “show me the man and I’ll show you the crime”. You think groupthink in academia is bad? Just wait until professional disputes lead to jail time for the loser.

Everyone seems to acknowledge this is a problem, but refuses to believe it actually affects anything when it comes time to “trust the science”. Yes, science is corrupted, but all the results can be trusted, and the correct answer is always reached in the end. So, is it really a problem? Or not?

A key skill for any scientist is to differentiate between quality work and science that can be easily faked.

The Alzheimer’s and Parkinson’s fields are too easy to fake, and too difficult to replicate. The new ideas are only ~20 years old. Big pharma companies are understandably wary of published papers.

When people say “trust the science”, they often refer to things like masks, and antibiotics, and vaccines. That science is hundreds of years old and has been replicated thousands of times.

TL;DR: Some science should absolutely be trusted, some shouldn’t. It’s not surprising that you can’t make blanket statements on a superfield ranging from germ theory to cold fusion.

> When people say “trust the science”, they often refer to things like masks, and antibiotics, and vaccines. That science is hundreds of years old and has been replicated thousands of times.

When people say “trust the science” they’re usually referring to fairly recent developments. Covid vaccines were in development and testing for just over 18 months before being mandated and were certainly not replicated on a large scale by disinterested 3rd parties before being mandated. The idea that we can have effective scientific policy without trust in scientific institutions is just… not accurate.

Exactly. Nobody needs to be told to “trust the science” on gravity and electricity, nobody asks to consult scientific consensus. The argument only arises for the more suspicious niches.

It’s a matter of how established the science actually is.

Questioning novel science is one thing, but questioning whether the Earth is flat, or germ theory, is another thing altogether. The problem with skeptics is that they sometimes hang around conspiracists.

It’s hard not to discount these people when the person next to them thinks black people are biologically inferior. When those skeptics don’t distance themselves from, or explicitly condemn, those bad actors, it calls into question whether their positions are born of skepticism or of some strange prejudice, with skepticism merely constructed as cover.

For example, during the Covid pandemic there was a lot of questioning around masks. In hindsight, the answer is obvious: it doesn’t really matter if masks were or were not effective, because they’re essentially free to wear. Even in the worst case, nobody is actually hurt.

But there were many, maybe millions, of mask deniers who would simply refuse to wear them. They were doing this because of institutional distrust and political motivations, not because they truly believed the masks were dangerous. And this is the trouble: these people are skeptics, but they’re skeptics with an end-goal of political destabilization, i.e. they’re dangerous.

When you mix it all together, which people often do to themselves, it discredits the very thought process.

Another example of the phenomenon where people can realize something when considering it from an abstract perspective but not at a realtime object level is psychological bias and imperfect rationality. If the topic of discussion is an article about bias, rare is the person who will deny the phenomenon, and many enthusiastically admit to suffering from the problem themselves. But if the topic of discussion is something else and one was to suggest the phenomenon may be in play: opposite reaction. During realtime cognition, that knowledge is inaccessible.

I honestly think if some serious attention was paid to this and various other real world paradoxes around us, we could actually make some forward progress on these problems for a change.

Only fix I can see is making scientific fraud criminal. But it has to be straight fraud and not just bad science.

I can’t imagine any other vocation where you can take public and private money, cheat the stakeholders into thinking they got what they paid for, and then just walk away from it all when you are found out. Picture a contractor claiming to have built a high-rise for a developer, doctoring photos of it, and then just going “oops, the money’s all gone” with no consequences when the empty lot is discovered years later.

There are unfortunately very rarely consequences for academic fraud. It’s not just that we only catch a small fraction — mostly the most brazen image manipulation — but these cases of blatant fraud happen again and again, to resounding silence.

Ever so rarely, there may be an opaque, internal investigation. Mostly, it seems that academia has a desire to not make any waves, keep up appearances, and let the problem quiet down on its own.

And occasionally a grad student who discovers academic dishonesty, and complains internally (naively trusting administrators to have humility and integrity), has their career ended.

I suppose a silver lining to all the academic fraud exposés of the last few years is that more grad students and faculty now know that this is a thing, and one that many will try to cover up, so trust no one.

Another silver lining might be that fellow faculty are more likely to believe an accusation, and (if they are one of the awful people) less likely to think they can save funding/embarrassment/friend by neutralizing the witness.

(ProTip: If the success of your dishonesty-reporting approach is predicated on an internal administrator having humility and integrity, realize that those qualities are the opposite of what has advanced a lot of academic careers.)

The people doing the investigation have a vested interest in keeping it quiet.

It’s like the old quote… “If you commit fraud as an RA that’s your problem. If you commit fraud as the head of department that’s the university’s problem.”

It seems like a strange thing to take someone with a long and respected career and subject them to what would essentially be a Western blot and photomicrograph audit before offering them a big position.

I really feel stupid asking experienced developers to do FizzBuzz. Not one has ever failed. But I have heard tons of anecdotes of utterly incompetent developers being weeded out by it.

I’m not a researcher or academic, but when I think of roughly how long it takes me to do meaningful deep work and produce a project of any significance, I’m struck that his 800 papers aren’t considered a red flag. Even at ~3 months per paper, that’s 200 years of work. Is it common for academics to produce research papers in a matter of days?

From the article:

> Masliah appeared an ideal selection. The physician and neuropathologist conducted research at the University of California San Diego (UCSD) for decades, and his drive, curiosity, and productivity propelled him into the top ranks of scholars on Alzheimer’s and Parkinson’s disease. His roughly 800 research papers, many on how those conditions damage synapses, the junctions between neurons, have made him one of the most cited scientists in his field.

It’s kind of like when reporters say a CEO built (insert ridiculously complex product here), e.g. ascribing the success of OpenAI to Sam Altman, or Apple to Steve Jobs. Sure, they were important in setting the direction and allocating resources, but they didn’t actually do the work.

Similarly, the heads of famous science labs have lots of talented scientists who want to work with them. The involvement of a lab director varies wildly, but for the hyper productive, famous ones, it’s largely the director curating great people, providing scientific advice, and setting a general research direction. The lab director gets named on all these papers that get generated from this process.

So 800 papers isn’t necessarily a red flag if the director is great at fundraising and has lots of graduate students/post docs doing the heavy lifting.

Similar to

> Founder, CEO, and chief engineer of SpaceX.
> CEO and product architect of Tesla, Inc.
> Owner, CTO and Executive Chairman of X (formerly Twitter).
> Founder of The Boring Company, X Corp., and xAI.
> Co-founder of Neuralink, OpenAI, Zip2, and X.com (part of PayPal)

It can only be a fraud.

Depends on your definition of fraud. Musk is obviously not chief engineer of SpaceX while actively working at Twitter, Tesla, and Neuralink. The founding claims aren’t that unbelievable though, founding 10 companies in 30 years isn’t that hard.
I would call it heavy exaggeration.

Among other things my physics career taught me: anyone who is listed as an author on more than 200 papers is almost definitely a plagiarist, in the sense of a manager who adds his or her name to the papers of the underlings in his or her lab. When I was still bothering to go to conferences I would sometimes have fun with them (the male variety is easy to spot: look for the necktie) by asking detailed questions about the methodology of the research. They never have any idea how the work was actually done.

I’ve said so many times, but we need to go back to a system where it is possible to make a career in science and get funding for replicating other people’s work to verify the results.

This leads to a tragedy of the commons. Take a random nation, say Sweden, that devotes 100% of its governmental and university research budgets to replication.

70% of the studies they attempt are successfully replicated.
20% are inconclusive or equivocal.
10% are clearly debunked.

Now the world is richer, but Sweden? No return on investment for the Swedes, other than perhaps a little advance notice on which hot new technologies their sovereign funds and investors ought not to invest in.

A bloc of nations, say NAFTA/CAFTA-DR, or the European Union, might be more practical.

That’s the carrot. As for the stick, bad lawyers can get disbarred, bad doctors can get “unboarded”. Some similar sort of international funding ban/blacklist for bad researchers would be useful.

I applaud that approach. The first year of a Ph.D. program could be reformulated to become 75% replicating the research of others, preferably that of unaffiliated research organizations.

A lot of this research is very involved and esoteric, requiring specialized equipment found only in one place, so some would be very hard to replicate. If what Theranos was doing (or claiming to do) was easy to replicate, it would’ve imploded years prior to when it did. So not all fraud could be detected, but a lot of the low-hanging fraud, especially in the psychological and pharmacological fields, could be quickly identified. Such a system would be a substantial upgrade and I applaud your suggestion. A smaller country could blaze the trail, because “big boys”, like the U.S., are too set in their ways.

Wouldn’t science in total be impossible to fund if this argument were true? What advantage does Sweden have from doing science and publishing if everyone else gets to use it and they could just wait for someone else to do it? If this was how it worked, wouldn’t every scientist work in secret and never publish anything?

When I was in my doctoral program I had some pretty promising early results applying network analysis to metabolic networks. My lab boss/PI was happy to advertise my work and scheduled a cross-departmental talk to present my research in front of ~100 professors or so. While I was making a last-minute slide for my presentation I realized one chart looked a little off and I started looking into the raw data. I soon realized that I had a bug in my code that invalidated the last 12 months of calculations run on our HPC cluster. My conclusions were flat out wrong and there was nothing to salvage from the data. I went to my lab boss the night before the talk and told him to cancel it and he just told me to lie and present it anyways. I didn’t think that was moral or scientifically sound and I refused. It permanently damaged my professional relationship with him.

No one else I talked to seemed particularly concerned about this, and I realized that a lot of people around me were bowing to pressure to fudge results here and there to keep up the cycle of publicity, results, and funding that the entire academic enterprise relied upon. It broke a lot of the faith I had been carrying in science as an institution, at least as far as it is practiced in major American research universities.

Coding errors are a really common source of fraud unfortunately. You did the right thing but the vast majority don’t. Given a choice between admitting the grant money was wasted, the exciting finding isn’t real, everyone who cited your work should retract their papers or just covering it up, the pressure to do the latter is enormous.

During COVID I talked to a guy who used to do computational epidemiology. He came to me because I’d written about the fraud that’s endemic in that field and wanted to get stuff off his chest. He was a research programmer, assisting scientists. One of the stories he told involved checking the code for a model written in FORTRAN. He discovered it was mis-using an FFI and using pointer values in equations instead of the dereferenced values. Everything the program had ever calculated was garbage. He checked and it had been used in hundreds of papers. After emailing the authors with a bug report, he got a reply half an hour later saying the papers had been checked and the results didn’t change so nothing needed to be done.
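
For anyone who hasn’t met that class of bug: it reproduces easily across any FFI boundary. A hypothetical Python/ctypes illustration (not the actual FORTRAN code from the story), where the address of a value, rather than the value itself, silently enters the arithmetic:

    import ctypes

    # A toy "measurement" living on the native side of an FFI boundary.
    native_rate = ctypes.c_double(0.37)

    rate_ok = native_rate.value               # correct: dereference, get 0.37
    rate_bug = ctypes.addressof(native_rate)  # bug: a memory address, not a rate

    # Downstream equations still run; they just compute with garbage.
    print("correct:", 100_000 * rate_ok)      # 37000.0
    print("buggy:  ", 100_000 * rate_bug)     # absurdly large, but no crash

Nothing crashes; every run still produces numbers, which is exactly why this kind of bug can survive across hundreds of papers.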

Little known fact: the COVID model that drove lockdowns in the UK and USA was nothing but bugs. None of the numbers it produced can be replicated. When people pointed out that this was a problem academics went on the attack, claimed none of the criticism was legitimate because it didn’t come from experts, and of course simp journalists went along with all of it. They got away with it completely and will probably do it again in future. Even in this thread you can see people defending COVID science as the good stuff! It was all riddled with fraud.

Part of the issue is that scientists are mostly self taught coders who aren’t interested in coding. They frequently use a standard of: if it looks right, it is right. The whole thing becomes a circular exercise in reinforcing their priors.

I’m surprised that people are surprised by science being done in non-scientific ways.

I got a taste of this in my high school honors biology class. I decided to do a survey of redwing blackbirds in my town. I had a great time, there was a cemetery across the street from my house with a big pond, where 6-8 males hung out. I was excited when later in the season several females also arrived and took up residence.

I eagerly wrote up my results in a paper. I thought I did “A”-level work but was distressed when the teacher gave me a B- or C+. She said, “My husband and I are birdwatchers who have published papers on redwing mating habits in the area, and we haven’t seen any females this year. Neither did one of your classmates who watched redwings in her neighborhood.” While she did not directly accuse me of fraud in writing, she strongly implied it.

I told her to grab her binoculars and hang out at the cemetery one morning. She declined, as she was a published authority and didn’t need to actually observe with her own eyes. IIRC I had photos, but they were from far away with a Kodak Instamatic (this was the mid-’80s), so she didn’t accept those as evidence.

I often wonder if my life would have gone in a different direction if I had a science teacher who actually followed the scientific method of direct observation! It didn’t come easy to me, but I was very interested in science before this showed me clearly that science is just another human endeavor, replete with bias, ego, horseshit, perverse incentives, and gatekeeping.

Scale this experience out to tens of thousands of young people. These kinds of people should not be teaching! A good teacher is capable of fearlessly admitting to a room of children that they were wrong and the students were right, or better yet that they have no idea what the answer is!

We have done human intellect a great disservice by mistaking the gift empiricism gives us, of predicting the world, for knowledge of the world itself, of which we possess almost none.

In the future, those who commit fraud are not likely to leave traces that a Western blot and photomicrograph audit can catch.

When the experiments are significant, double-blind is not enough. You need external auditors when conducting experiments, and preferably a team running the experiments that is separate from the one that designed them.

On the plus side, this is the kind of stuff you could screen pretty easily with large-model machine learning. Not that there is a business in identifying scientific fraud (doing that with fraudulent government documents would probably have a better ROI, at least for the taxpayer), but clearly we need a repository of every image/graph that has been published as evidence to start.

It would be something you could offer to journals, perhaps as a business. Sort of a “peer reviewed and fraud analyzed” kind of service.

What is truly sad for me is the ‘wrong paths’ many hard working and well meaning scientists get deflected down while someone cheats to get more ‘impact’ points.

Is it time for periodic AI-driven audits of papers? Some types of audits may be easy, Western blots for example. But many edge cases will require lots of sleuthing, or preferably open access to all files and data. Obviously, paying for your own audit sets up the incentives the wrong way.

Alzheimer’s research has been a mess for 30 years, as Karl Herrup argues persuasively in How Not to Study a Disease.

> There’s also a proposed Alzheimer’s therapy called cerebrolysin, a peptide mixture derived from porcine brain tissue. An Austrian company (Ever) has run some small inconclusive trials on it in human patients and distributes it to Russia and other countries (it’s not approved in the US or the EU). But the eight Masliah papers that make the case for its therapeutic effects are all full of doctored images, too.

> cerebrolysin

This was discussed here recently: https://news.ycombinator.com/item?id=41239161

> But if the NIH had done that in 2016, they wouldn’t be in the position they’re in now, would they? How many people do we need to check? How many figures do we have to scrutinize?

All of them

For all the complaints about AI-generated content showing up in scientific journals, I’m excited for the flip side, where an LLM can review massive quantities of scientific publications for inaccuracies/fraud.

E.g. finding when the exact same image appears in multiple publications, but with different captions/conclusions.

The evidence in this case came from one individual willing to volunteer hundreds of hours producing a side by side of all the reports. But clearly that doesn’t scale.

I’m hoping it won’t have the same results as AI Detectors for schoolwork, which have marked many legitimate papers as fraud, ruining several students’ lives in the process. One even marked the U.S. Constitution as written by AI (1).

It’s fraud all the way down, where even the fraud detectors are fraudulent. Similar story to the anti-malware industry, where software bugs in security software like CrowdStrike, Sophos, or Norton cause more damage than the threats they prevent against.

(1) https://www.reddit.com/r/ChatGPT/comments/11ha4qo/gptzero_an…

> For all the complaints about AI-generated content showing up in scientific journals, I’m excited for the flip side, where an LLM can review massive quantities of scientific publications for inaccuracies/fraud.

How would this work? AI can’t even detect AI generated content reliably.

Not in a zero-shot approach. But LLMs are more than capable of handling a scenario similar to the one presented:

– Parse all papers you want to audit

– Extract images (non-AI)

– Diff images (non-AI)

– Pull captions / related text near each image (LLM)

– For each image pair with > 99% similarity, use an LLM to classify whether the conclusions differ (i.e. highly_similar, similar, highly_dissimilar).

Then aggregate the results. It wouldn’t prove fraud, but it could definitely highlight areas for review, e.g. “This chart was used in 5 different papers with dissimilar conclusions”. A rough sketch of the non-AI steps follows below.
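
A minimal sketch of the non-AI extract/diff steps, assuming figures have already been pulled out into one folder per paper. The directory layout and the distance threshold are invented for illustration; a production system would want a stronger perceptual hash:

    import hashlib
    from pathlib import Path
    from PIL import Image  # pip install Pillow

    def ahash(path: Path) -> int:
        """64-bit average hash: 8x8 grayscale, thresholded at the mean."""
        img = Image.open(path).convert("L").resize((8, 8))
        pixels = list(img.getdata())
        mean = sum(pixels) / 64
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    figures = sorted(Path("extracted_figures").glob("*/*.png"))  # paper_id/fig.png
    hashes = {f: ahash(f) for f in figures}

    # Flag near-identical figures that appear in *different* papers.
    flagged = [
        (a, b)
        for i, a in enumerate(figures)
        for b in figures[i + 1:]
        if a.parent != b.parent and hamming(hashes[a], hashes[b]) <= 4
    ]
    print(f"{len(flagged)} suspicious cross-paper figure pairs")

Each flagged pair would then go to the LLM step for the caption/conclusion comparison. An average hash is crude: it catches exact reuse and light recompression but misses crops, rotations, and contrast flips, which is where stronger perceptual hashing or learned embeddings would come in.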

Wouldn’t it be cool if people got credit for reproducing other people’s work instead of only novel things. It’s like having someone on your team that loves maintaining but not feature building.

LLMs might find some specific indications of possible fraud, but then fraudsters would just learn to avoid those. LLMs won’t be able to detect when a study or experiment isn’t reproducible.

I don’t know why this would be surprising. There’s nothing more obvious than the fact that research is riddled with both fraud and laughably shoddy work.

I guess we need to find a way to incentivize good practice rather than interesting results? Turns out that science is so hard that people cheat.

I can’t manage to be really surprised. We already know many people will cheat when the incentives are right. And when the law of the land is “publish or perish”, then some will publish by any means necessary. Thinking “this subsegment of society is so honorable, they won’t cheat” would be incredibly naive.

I am wondering whether some type of bounty program that paid out on sufficient proof of fraud would work. Sadly, I don’t think anyone will fund it. And those participating likely won’t be received well in the relevant circles…

Does anyone know of an up-to-date or live visualization of the amount of scientific fraud? And perhaps also measuring the second order effects? i.e. poisoning of the well via citations to the fraudulent papers.

It’s hard to tell at this point if it’s just selection bias or if the scientific fraud problem has outgrown the scope of self-correction.

I’m not a scientist (because of fraud and other reasons related to academia), but I thought one of the tenets of an experiment was reproducibility. Were his experiments reproduced independently? Why not?

> But at the same time I have a lot of sympathy for the honest scientists who have worked with Masliah over the years and who now have to deal with this explosion of mud over parts of their records.

This really is quite unfortunate.

Is it? Is it possible that none of them knew? Should they have responsibilities to go with the benefits of putting their names on major discoveries?

Ah, I see how you could misunderstand. In context, this sentence was contrasting between those who knew, and those who didn’t know about the fraud. To make my point more clear:

It’s impossible that none of them knew.

It’s possible that some of them didn’t know.

Why would we expect academia to be different from anything else these days? Fraud is how you get ahead. It is how you gain a competitive advantage. When everyone is cheating, the only way to win is to cheat smarter. Fraud is the end result of the dreams that motivate people to be better than they are.

Intrinsic to the article is, arguably, a significant cause of fraud in this field: The article talks about fraud as if it’s done by the ‘other’ – by someone else, other than the article’s author (or their audience).

The solution starts when you say, ‘we committed fraud – our field, our publication, the scientific enterprise. What are we going to do?’

Does the author really have no idea about these things? That they occur?

Reminder that these people are only caught because they photoshopped Western blots.

Even more widespread is when PIs just throw out data that don’t agree with their hypothesis, and make you do it again until the numbers start making sense.

It’s atrocious, but so common that if you’re not doing this, you’re considered dumb or weak and not going to make it.

Many PIs end up mentally justifying this kind of behavior (need to publish / grant deadline / whatever), even over the protests of most of the lab members.

Those who refuse to re-roll their results — those who want to be on the right side of science — get fired and blackballed from the field.

And this is at the big, famous universities you’ve all heard of.

As a scientist who has published in the neuroscience space, I don’t know what to say other than that the incentives in academia are all messed up. Back in the late 90s, NIH made a big push on “translational research”; that is, researchers were strongly encouraged to demonstrate that their research had immediate, real-world benefits or applications. Basic research, and the careful, plodding research needed to nail down and really answer a narrow question, was discouraged as academic navel-gazing.

On one hand, it seems the push for immediate real world relevance is a good thing. We fund research in order that society will benefit, correct? On the other hand, since publications and ultimately funding decisions are based on demonstrating real world relevance, it’s little surprise scientists are now highly incentivized to hype their research, p-hack their results, or in rare cases, commit outright fraud in an attempt to demonstrate this relevance.

Doing research that has immediate translational benefits is a tall order. As a scientist you might accomplish this feat a few times in your career if you’re lucky. The rest of the corpus of your work should consist of the careful, mundane research the actual translational research will be based upon. Unfortunately it’s hard to get that foundational, basic, research published and funded nowadays, hence the messed-up incentives.

There’s evidence that the turning point was in the 90s, but I suspect the real underlying problem is indirect funds as a revenue stream for universities, combined with a for-profit business-model expectation imposed by politicians at the state and other levels. The expectation changed from “we fund universities to teach and do research” to “universities should generate their own income”, which isn’t really possible with research, so federal funding filled the gap. This led to the indirect-fund firehose of cash, pyramid-scheme labs, and so forth. It became a feedback loop, and now we are where we are today.

Translational research is probably part of it but I think it’s part of a broader hype and fad machine tied to medicine, which has its own problems related to rent-seeking, regulatory capture, and monopolies, among other things. It’s one giant behemoth of corruption fed by systemic malstructurings, like a biomedical-academic complex of problematic intertwined feedback loops.

I say this as someone whose entire career has very much been part of all of it at some level.

Good points, thanks. As I’m sure you’re aware, the indirect rates at some universities are above 90%. That is, for every dollar that directly supports the research, almost another dollar goes to the university for overhead. Much of this overhead is legitimate: facilities and equipment expenses, safety training, etc… but I suspect a decent portion of it goes to administrative bloat, just as much as the education-only part of the university has greatly increased administrative bloat over the last 30-40 years.

Another commentator made a separate point about how professors don’t always get paid a lot, but they make it up in reputation. Ego is a huge motivator for many people, especially academics in my observation. Hubris plays no small part in the hype machine surrounding too many labs.

Perhaps the root of all evil is “publish or perish”. I am long out of research, working at a teaching college, and yet I am still expected to publish. Idiocy.

Academic fraud is also enabled by lack of replication. No one gets published by replicating someone else’s work. If one could incentivize quality replication, that could help.

Why worry about fraud, deception, and misleading done with AI when we have the old kind of fraud?

Or, on the other hand, now you don’t have to manipulate images; you can just generate the ones you need.

I would rather die than deliberately cause a humongous speed bump in the history of human understanding of the universe like this guy did. And the choice is never that stark. It’s usually “I’d rather work in a less highly paid role”.

To selfishly discard the collective attention of scientific experts for undue gain is despicable and should disqualify a person from all professional status indefinitely in addition to any legal charges.

I deeply respect anyone whose desires align with winning the collective game of understanding that science should be. I respect even more those folks who speak up when their colleagues or even friends seek to hack academia like this guy did.

But if the NIH had done that in 2016, they wouldn’t be in the position they’re in now, would they? How many people do we need to check? How many figures do we have to scrutinize? What a mess.

This is the core problem with science today. Everyone is trying desperately to publish as much, and as fast, as they can. Quantity over quality. That quantity dictates jobs, fellowships, grants, and careers. Dare I say we have a “doping” problem in science and not enough controls. Especially when it comes to some countries’ feverish output of papers that have little to no scientific value, cannot be replicated, and are full of errors, but at least they’re published and the authors can get a job.

For a long time the numbers have been manipulated and continue to be so, seemingly due to national pride.

https://en.wikipedia.org/wiki/List_of_countries_by_number_of…

https://www.science.org/content/article/china-rises-first-pl…

Scholars disagree about the best methodology for measuring publications’ impact, however, and other metrics suggest the United States is still ahead—but barely.

While I agree this is a big problem, science should never be defined by a single article.

I was always taught that science is a tree of knowledge where you build off previous positive results, all of which collapse when an ancestor turns out to be false.

I see this as a pruning process and an inevitable part of science.

But I would further argue against what others were saying about personal ethics. Science must remove the human as much as possible from the process.

This stuff just ENRAGES me.

With that off my moobs … for those interested in the broader topic, I highly recommend Science Fictions, by Stuart Ritchie. The audiobook is also excellent.

I’m not a working scientist, and I found it completely engaging. Worth it just for the explanation of p-hacking.

I hate the thought that researchers and drug developers may have wasted their effort and dollars developing drugs based on one extremely selfish person’s bogus results.

Not clear whether it would be a net benefit, adding constraints and complexity to the scientific process which will be skipped whenever possible by underpaid labrats. Also, GIGO.

Need to tackle the incentives directly.

Anecdotally, during my (fairly short-lived) academic career, in which I did research with three different groups, 2/3 of them were engaging in fraudulent research practices. Unfortunately the one solid researcher I worked for was in a field I wasn’t all that interested in continuing in, and as a naive young person who believed in the myth of academic freedom and didn’t really understand the funding issue, I jumped ship to another field, and found myself in a cesspool of data manipulation, inflated claims, and all manner of dishonest skullduggery.

It all comes down to lab notebooks and data policies. If there is no system for archiving detailed records of experimental work, if data is recorded with pencils so it can later be erased and changed, if the PI isn’t in the habit of regularly auditing the work of grad students and postdocs with an eye on rigor and reproducibility, then you should turn around and walk out the door immediately.

As to why this situation has arisen, I think the corporatization of American academics is at fault. If a biomedical researcher can float a false claim for a few years, they can spin their research off to a startup and then sell that startup to a big pharmaceutical conglomerate. If it fails to pan out in further clinical trials, well, that’s life. Cooking the data to make it look attractive to an investor – in the almost completely unregulated academic environment – is a game that many bright-eyed eager beavers are currently playing.

As supporting evidence, look at mathematical and astronomical research, the most fraud-free areas of academics. There’s no money to be made in studying things like galactic collisions or exoplanets, the data is all in the public domain (eventually), and with mathematics, you can’t really cook up fraudulent proofs that will stand the test of time.

I imagine how common fraud is has more to do with the relative number of researchers in a field and the chance of getting caught.

Sure money could be a factor, but the desire for prestige can motivate people just as easily.

> …(bio people make money by) spin their research off to a startup… …mathematical and astronomical research (is) fraud-free…

You are talking about a part of the academy that, relative to medicine, very few people work in.

Show up to a bank looking like someone who knows math, and they’ll cut you a huge check. Is that not fraud?

> mathematical and astronomical research, the most fraud-free areas of academics. There’s no money to be made

So we’re systemically safeguarding the quality of astronomy research, by setting up a gradient (at MIT: restaurant catering for business talks, pizza for CS, stale cookies for astronomy) to draw off some flavors of participants and thus concentrate others?

Glad the title here is “Fraud, so much fraud” and not “Research misconduct”. I hope that Masliah is charged with federal wire fraud.

In cases like this where the fraud is so blatant and solely done for the purposes of aggrandizing Masliah’s reputation (and getting more money), and where it caused real harm, we need to treat these as the serious crimes that they are.

I’m a recovering academic, and have not published since not long after defending my dissertation.

I blame this behavior entirely on “publish or perish”. The demand for novel, thoughtful, and statistically significant findings is tremendous in academe, and this is the result: cheating.

I left professional academia because I resented the grind, and the push to publish ANYTHING (even reframing and recombining the same data umpteen times in different publications) in an effort to earn grants or attain tenure.

The academic system is broken, and it cannot be repaired with minor edits, in my opinion. This is a tear-it-out-and-do-over scenario for academic culture, I’m afraid.

Just a lark, not to be taken too seriously:

I wonder if a market-driven approach could work here, where hedge funds hire external labs to attempt to reproduce the research underlying new pharmaceutical companies or trials and then short the companies whose results they can’t replicate before results get reported.

This is terribly, terribly frustrating. For every one of these cheats there are hundreds of honest, extremely hard-working, ETHICAL scientists who toil 60 hours a week doing the thing they love. It is also terribly frustrating that, being human after all, smooth talkers with a confident stride and an easy smile, eager to shake hands, can and do quickly climb the academic ladder, especially the administrative ladder.
This makes me terribly sad.

> For every one of these cheats there are hundreds of honest, extremely hard-working ETHICAL scientists

Every one of these /discovered/ cheats.

Remember this particular cheat was one of your ethicals until a few moments ago.

Once, at 3Com, Bob Metcalfe introduced a talk by one of his MIT professors with the little joke, “The reason academic politics is so vicious is that nothing’s at stake.”

The guy said, “That depends on whether you consider reputation ‘nothing.’”

I guess what that shows is, you can always negotiate and compromise over money, but reputation is more of a binary. An academic can fake some work, and as long as he’s never called on it, his reputation is set.

So yeah, a little more fear of having one’s reputation ruined would go a long way towards fixing science.

> reputation

A caveat that “reputation”, like competence, is more variegated and localized than is often appreciated. As with someone who is highly competent and well regarded in their own subfield, while simultaneously rather nutty about some nearby subfield where they don’t actually work.

One can have a reputation like “good, but beware they have a thing for <mechanism X>”. Or “ignore their results using <technique> – they see what they want to see”. Subtext that gets passed among professors chatting at conferences, and to some extent to their students, but otherwise isn’t highly accessible.

When people speak of creating research AIs using just papers… that’s missing seemingly important backchannels, and correspondence with authors. It’s attempting research AI as a professionally isolated, developing-world professor.

But this is really a societal/political issue: since we decided that economic capital is king and symbolic capital not that much… (This is really the story of the last four decades or so.)

>But this is really a societal/political issue

Bang on.

Not many people in academic/technical circles realize this, often for their entire lives. In their naive worldview, they cannot even imagine that people can stoop that low.

(embarrassingly and shamefully I used to be one of those naive people)

The problem being, we have “economized” academia through things like “publish or perish”, a citation pseudo-stock-market, and third-party funding, and all incentives are built around this pseudo-economy. Which also imports all the common incentives found in the real economy…

Well, this is about Pierre Bourdieu, and he had a few things to say about academia, as in Homo Academicus.

And I’m not sure what example could better illustrate the problem with the lopsided valuation of economic capital and the general devaluation of symbolic capital than this one. (As compared to the pre-1980s; we have since undergone a social revolution of considerable dimensions, which is also why there isn’t an easy fix.)

Socio-economic issues aren’t one-dimensional, in fact they’re very complex. Most of our systems and beliefs are socially constructed.

Humans are, by our biology, social creatures. Modern humanity more than ever before. If you’re not considering the social effects, then IMO you’re not addressing anything of value.

I have always said that while professors get paid less money than in industry, they are compensated in reputation to make up for it. Status and reputation are the currency of academia.

> and others appear to be running for cover.

In every industry right now there appear to be a lot of people running cover. I have a personal belief that, with the exception of a few industries, 50% of managers are simply running cover. This is easy to explain:

1/ Nothing follows people

2/ Jobs were easy to get in the last 3 years (this is changing FAST)

3/ Rinse and repeat and stay low until you’re caught.

If you’re an academic and want to use the fastest publishing stack ever created that also helps guide you to building the most honest, true thing you could create, I have built Scroll and ScrollHub specifically for you.

https://hub.scroll.pub/?template=paper

Happy to provide personal help onboarding those who want to use this to publish their scientific work. [email protected]

If you are familiar with academia, you’ll realize the academic dishonesty policy is essentially the playbook by which academics behave. The author is surprised that Eliezer Masliah purportedly had instances of fraud spanning 25 years. I bet the author would be even more surprised to find out that most academics are like that for the entire duration of their career. My favorite instance is Shing-Tung Yau, who is still a Harvard professor, who attempted to steal Grigori Perelman’s proof of the Poincaré conjecture (a Millennium Prize problem, https://www.claymath.org/millennium-problems/, that comes with a $1MM prize and $10k/mo for the rest of one’s life; Perelman rejected all of it).

I mean, get this: an extremely gifted mathematician living on a measly salary in Russia almost had his Millennium Prize stolen by a Harvard professor. What more evidence do you need?

From personal experience, it is all I’ve seen. Could anyone be in a position to extrapolate to all of academia without speaking from personal experience? I’m not speaking of all academics (hence ‘most’). It’s a statement similar to “Hollywood has a drug problem” or something of that sort.

My advice to anyone going into Hollywood would be to stay away from drugs; my advice to anyone going into academia is to treat every interaction as if you’ve just sat at a poker table in Las Vegas.

I work in Hollywood. I am not sure it has more of a drug problem than, say, tech or finance. Maybe it does – I don’t know. The point is, when a celebrity is a drug addict you hear about it. When a banker or a lawyer is, you don’t.

Our experience of things has a lot of bias toward what we want to hear. Generalization plays into stereotypes and ideology.

I believe that tech and finance also have a drug problem. Those who sell expensive drugs like cocaine go after rich clients. You work in Hollywood, but have you been attending wild private parties? I’ve worked in academia and I was in the thick of it; I’ve experienced firsthand the fraud I’m talking about, and it was a large part of my experience, not some side note. Perhaps it’s an uncomfortable truth that academia is in the state it is in, but again, it is of utmost importance to warn younger people of its perils. (Act as if you’re at a poker table at all times.) In any case, how do you know that it isn’t your biases that prevent you from considering what I describe? What is so surprising about the claim that people who are very incentivized to steal and commit fraud do so if they are not punished for it?

edit: and it’s not things I’ve heard; these are direct experiences, i.e. people stole my work, and things like that. As a graduate student, to watch professors come to you with problem X, take what you’ve said (an actual solution), and publish a paper without attribution, that sort of thing; to report it and have nothing be done about it, et cetera, and on it goes; it’s just instance after instance of such behavior, or the million ways in which they are careful to trick you into working on their problems without giving attribution. One such trick, for example, that again happened to me: after a conference talk I got into an e-mail discussion where I explained my approach; I was told that “they already have these results” (the trick here was to divulge less in the talk than what was currently known, so that if another person later shared new progress they could claim to have already established it, and hence avoid having to share attribution). It turned out that our discussion was enough for them to go from n=3,4 to a general formula involving primes, because I pointed out a certain property they had not noticed. This is just a single example of the sorts of tricks, aside from total fraud, that happen, and one of the milder incidents that happened to me.

> you’ll realize the academic dishonesty policy is essentially the playbook by which academics behave

If you are unable to “extrapolate to all of academia” then I suggest you be more selective in your statements.

I extrapolate to all of academia, but not to all academics (persons working in academia). My methodology is based on my intuition and my experiences. Already in this HN thread, the comments read like a first meeting of battered housewives. You don’t have to believe me or others; I’m just issuing a warning to anyone thinking of getting into academia: be alarmed and alert, and always careful. It’s nothing like the movies portray academia to be; instead it’s a thieves’ den, or a poker table, etc. You get the point.

From “The Big Crunch” by David Goodstein (1994):
https://www.its.caltech.edu/~dg/crunch_art.html

    "The crises that face science are not limited to jobs and research funds. Those are bad enough, but they are just the beginning. Under stress from those problems, other parts of the scientific enterprise have started showing signs of distress. One of the most essential is the matter of honesty and ethical behavior among scientists.
    The public and the scientific community have both been shocked in recent years by an increasing number of cases of fraud committed by scientists. There is little doubt that the perpetrators in these cases felt themselves under intense pressure to compete for scarce resources, even by cheating if necessary. As the pressure increases, this kind of dishonesty is almost sure to become more common.
    Other kinds of dishonesty will also become more common. For example, peer review, one of the crucial pillars of the whole edifice, is in critical danger. Peer review is used by scientific journals to decide what papers to publish, and by granting agencies such as the National Science Foundation to decide what research to support. Journals in most cases, and agencies in some cases operate by sending manuscripts or research proposals to referees who are recognized experts on the scientific issues in question, and whose identity will not be revealed to the authors of the papers or proposals. Obviously, good decisions on what research should be supported and what results should be published are crucial to the proper functioning of science.
    Peer review is usually quite a good way to identify valid science. Of course, a referee will occasionally fail to appreciate a truly visionary or revolutionary idea, but by and large, peer review works pretty well so long as scientific validity is the only issue at stake. However, it is not at all suited to arbitrate an intense competition for research funds or for editorial space in prestigious journals. There are many reasons for this, not the least being the fact that the referees have an obvious conflict of interest, since they are themselves competitors for the same resources. This point seems to be another one of those relativistic anomalies, obvious to any outside observer, but invisible to those of us who are falling into the black hole. It would take impossibly high ethical standards for referees to avoid taking advantage of their privileged anonymity to advance their own interests, but as time goes on, more and more referees have their ethical standards eroded as a consequence of having themselves been victimized by unfair reviews when they were authors. Peer review is thus one among many examples of practices that were well suited to the time of exponential expansion, but will become increasingly dysfunctional in the difficult future we face.
    We must find a radically different social structure to organize research and education in science after The Big Crunch. That is not meant to be an exhortation. It is meant simply to be a statement of a fact known to be true with mathematical certainty, if science is to survive at all. The new structure will come about by evolution rather than design, because, for one thing, neither I nor anyone else has the faintest idea of what it will turn out to be, and for another, even if we did know where we are going to end up, we scientists have never been very good at guiding our own destiny. Only this much is sure: the era of exponential expansion will be replaced by an era of constraint. Because it will be unplanned, the transition is likely to be messy and painful for the participants. In fact, as we have seen, it already is. ..."

If there’s this much overt, deliberate fraud and dishonesty in all of our research institutions, the quantities of soft lying and fudging are inconceivable.

We need to seriously rethink our approach to stewarding these institutions and ideas; public trust is rightfully plummeting.

I’ve been saying this for years and have been punished for it. Even here.

I’ve done Biology and CS for almost 20 years now, I’ve worked at four of the top ten research institutions in the world. The ratio of honest to bullshit academics is alarmingly low.

Most of these people should be in jail. Not only do they commit academic fraud, many of them commit other types of crimes as well. When I was a PhD student, my 4 year old daughter was kidnapped by staff at KAUST. Mental and physical abuse is quite common and somewhat “accepted” in these institutions. Sexual harassment and sexual abuse is through the roof.

I am very glad that, slowly, these things are starting to vent out. This is one real swamp that needs to be drained.

Some smartass could come up and say, “Where is your evidence for this?” That attitude is what allows this abhorrent behavior to thrive. Do you think these people are not smart enough to commit these crimes in covert ways? The reason they do it is that they know no one will find out and they will get away with it.

What’s the solution? I’ve thought about this a lot, a lot. I think a combination of policies and transparency could go a long way.

Because of what they did to me, I am fully committed to completely destroying and expunging from academia the people who do these things. If you, for whatever reason, would like to help me on this mission, shoot me an email; there are a few ideas already taking shape toward that goal.

“Four of the top ten” research institutions is probably part of the reason for your experiences. I went to an elite private undergrad as a scholarship student and was sexually abused by the son of high powered lawyers, probably awful people themselves, who targeted scholarship students, international students, etc. because we were vulnerable with no recourse. I then went to a highly ranked but not super sexy public school for my PhD and my experience has been significantly better.

Bad actors are attracted to glamor and prestige because they’re part of the cloaks and levers they use to escape consequences. Bad actors are far less attracted to, just as an example, living in Wisconsin, Michigan, or Indiana and telling people at conferences that they work at UW rather than Cambridge. UCs are also vastly more welcoming and supportive of working and middle class students than HYPSM even at the graduate level. That doesn’t mean that you won’t find any assholes at these places, and go too low in the rankings and you’ll see ugly competition over scarce resources, but there’s a sweet spot where more honorable people who aren’t chasing prestige cluster and you’ll find more support and recourse. Public schools ranked 5-15 are best for students without significant, significant social savvy and other forms of protection, IMO.

> Public schools ranked 5-15 are best for students without significant, significant social savvy and other forms of protection

Do you think these are also the best for incoming faculty?

We are fine now. That was four years ago. Our embassy intervened and eventually she was released and we were able to fly back home.

I’m not 100% satisfied with how they handled the situation (they took a while to react to the issue) but in the end we were able to leave that place and I’m happy with that.

I had a feeling academia was just run by people who let blatant fraud, exploitation and abuse of PhD students, stealing during peer review, and other forms of plagiarism, fraud, and exploitation slide by. They let it slide because correcting these things would lead to massive changes in academia that might put them out of jobs.

Every year that feeling becomes more certain. Glad I quit the track in grad school.

I feel terrible for all the incredibly smart and hard-working academics who remain honest and try to make it work. They do what they love; otherwise they wouldn’t do such intensive work with so much sacrifice.

It is really disheartening, too, because academia only turns on the “honesty filter” when it comes to minor grad students who pissed off the wrong people. But you can commit fraud constantly and become president of Harvard if you know the right politics.

Dishonest lot. I hope karma is real so they get what is coming to them for taking advantage of people who just love to increase humanity’s knowledge.

They would be out of jobs.

You’re being downvoted because you’re correct: HN is an echo chamber for zealous regurgitation of the opinions of the academy and the media, institutions that have decayed. It’s been happening slowly for a while, but now things are starting to come apart at the seams.

It is really annoying because a common response is

“We know academia is bad. But this is the best we have and it is hard to improve”

when that is false on two counts.

1. If you had said the same thing before 2016 or COVID, people would not have agreed that academia is rife with fraud or worthy of skepticism.
2. The people dismissing suggestions for how the system can be improved are the same ones who would suffer from disruption, as you say. They have the power to dismiss these arguments in the first place.

You are being downvoted because you are extrapolating from one fraud case to call all scientists dishonest.

I can do it too: a person named SpaceManNabs made a bad post. Therefore all posts by SpaceManNabs, and probably all posts on Hacker News, are bad. A dishonest lot.

> from one fraud case to call all scientists dishonest

I specifically mention that the majority of scientists are not dishonest. The majority of scientists are not running academia. The majority of scientists are suffering from this system, to differing degrees.

If I were as rude as you, I’d extrapolate about your reading ability, especially since it is not just one fraud case.

Regardless, even if I were wrong on that, all my other criticisms of academia still stand, like the exploitation of PhD students. I really hope the grad student unions get what they want.

I appreciate your response, though. It makes me feel confident that it is just salty people on HN who hate the truth, because otherwise, why would you mischaracterize what I said?

Yes, pin it all on Masliah; turn him into a sort of bizarro-Jesus who takes on the sins of the entire self-seeking, publish-or-perish, p-hacking, pharma-grifting, meta-meta-meta-analyzing, only-verify-at-gunpoint “profession.”

Unfortunately, sometimes someone becomes a bad example. That doesn’t make them a “scapegoat”, the favored defense of people like that.

A scapegoat is something that takes on all the sins of a lot of others who skate free. If Masliah is the only one who ever suffers, then he IS a scapegoat, but if this article serves to uncover a lot of other bad actors, then he’s not. And if his example serves to warn a lot of other scientists to clean up their acts, then his suffering is a benefit.

The language of the article is as low as it is loaded. This is just Derek Lowe covering for the fact that “Science” magazine and the like have let this scoundrel (and many more like him) carry on, without hindrance, for an entire career, pointing the finger anywhere and everywhere but at the journals themselves. None of this is an isolated incident. It is widespread! There is a new scapegoat every month.

Universities became tax-funded, and the consequence is warm bodies filling chairs. I have experience with a number of big-name universities in the U.S.; they are all about office and national politics. It’s not about the work, and it hasn’t been for a while now.

Defund universities. No more student loans, make them have to earn their place in the market or we will continue to suffer under the manipulated system that is actually killing students.

> Defund universities. No more student loans, make them have to earn their place in the market or we will continue to suffer under the manipulated system that is actually killing students.

This… it’s no longer about value, it’s about optics… The problem exists in most industries now. The pendulum needs to swing back the other way before it’s too late to stop the decay…

It’s wrong to think that because there are reports of fraud or systematic error in science you shouldn’t trust it. I’m sure all those things exist. But they also exist in every other institution, with a lot less self-reflection and self-correction.

Nassim Taleb has said that people think weathermen are terrible predictors of the future, yet meteorology is among the most accurate sources of predictions in our lives. Because we can easily validate the forecasts, we see the mistakes. If we had as much first-hand experience with other types of predictions, we’d appreciate the accuracy of weathermen. My point is: just because you know the flaws in a system, don’t assume it isn’t better than another.

So one of the things you could do (if you were psychotic) is find poor people, replicate neurodegenerative diseases in them, and then feed them the drug to see if the cure works. And then when they go to take MRIs, you smuggle out the imaging from the Sutter Health clinic on Van Ness in San Francisco, California.

Is that why my head hurts?

Because then it becomes attempted murder and torture overseas, which is all sorts of jail time. Neat.

Why does Peter Teller Weyand at 555 Beule Street have a headache? My head hurts. So I just emailed a couple hundred academics across the country the same question.

I don’t like having my head hurt.
