It's really a bummer to see this marketed as 'AI Discovers Something New'. The authors in the actual paper carried out an enormous amount of work, the vast majority of which is relatively standard biochemistry and cell biology - nothing to do with computational techniques. The AlphaFold3 analysis (the AI contribution) literally accounts for a few panels in a supplementary figure - it didn't even help guide their choice of small molecule inhibitors since those were already known. AlphaFold (among other related tools) is absolutely a game changer in structural biology and biophysics, but this is a pretty severe case of AI hype overshadowing the real value of the work.
bilekas 12 hours ago [-]
Yeah, it’s a really strange title for the actual work; it’s like saying Bic pens helped find X, Y, and Z simply because they were used to take notes.
tim333 9 hours ago [-]
>“It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery.”
It's not like bic pens. It's a new technique they couldn't do before that helped crack the mystery.
Also the title is "AI Helps..." not "AI Discovers" so that's kind of a strawman. I don't think anyone is denying the humans did great work. Maybe it's more like Joe Boggs using the Hubble telescope to find a new galaxy, and people moaning because the telescope gets a mention.
I'm quite enthusiastic about the AI bit. My grandad died with Alzheimer's 50 years ago. My sister is due to die of ALS in a couple of years. Both areas have been kind of stuck for decades. I'm hoping the AI modeling allows some breakthroughs.
avogt27 5 hours ago [-]
I think my problem is that this is maybe the most minimal and mundane use of AlphaFold, but it is treated like one of the main points of the paper. The small molecules they tested were already known to inhibit this enzyme, the structural modeling done based on AlphaFold is a minute part of the story compared to the dozens of incredibly difficult experiments they did - it almost seems the sort of thing one of the reviewers suggested during the initial submission and was added after the first round of edits.
I can't tell you how many times I've sat through talks where someone (usually ill-equipped to really engage with the research) suggests that the speaker tries AlphaFold for this or that without a clear understanding of what sort of biological insight they're expecting. It's also a joke at this point how often grad students plug their protein into AlphaFold and spend several minutes giving a half-baked analysis of the result. There are absolutely places where structure prediction is revolutionizing things including drug discovery, but can we acknowledge the hype when we see it?
I'm very sorry for your loss, my aunt is also declining due to this disease. I think statistically everyone either goes through it or becomes a caretaker if they live long enough.
aantix 15 minutes ago [-]
>I don't think anyone is denying the humans did great work
The title cites the AI contribution, not the human one
bilekas 7 hours ago [-]
> Maybe it's more like Joe Boggs uses the Hubble telescope to find a new galaxy and moaning because the telescope gets a mention.
Maybe I've underestimated the impact the AI tooling has had then, because it seems to me that your example wouldn't be an issue, as the telescope is literally the entire tool used to make the discovery.
> I'm hoping the AI modeling allows some breakthroughs.
I'm actually on board with you on this. I think it can be extremely useful and really speed things up when dealing with the huge amounts of complex data that need to be worked with; my only gripe here was the title itself. It seems forced when it could have been "Amazing breakthrough discovered to unravel cause of Alzheimer’s" - from there the main body of the article would match the title, with a nice shout out to a really creative use of AI.
thesz 2 hours ago [-]
> It's a new technique they couldn't do before that helped crack the mystery.
What about SAT-based solvers [1] for the same problem? Would that technique do the same? If not, why?
Maybe more like “Excavator helps archaeologists discover new species”?
[1] https://ieeexplore.ieee.org/document/5361301
bonoboTP 9 hours ago [-]
Sure, when the excavator was new and could help do larger scale archeological excavations that were never possible before, then why not title it like that?
skeeter2020 56 minutes ago [-]
OK, then "New mega-sized excavator allows archeologist to process more material than before"?
tim333 9 hours ago [-]
I was thinking maybe like Van Leeuwenhoek uses glass gizmo to discover first microorganism. In that AI molecular simulation is a new tech which will probably get better and help many discoveries.
gosub100 3 hours ago [-]
Computer-assisted genomics leads scientists to discover...
trott 2 hours ago [-]
> The AlphaFold3 analysis (the AI contribution) literally accounts for a few panels in a supplementary figure - it didn't even help guide their choice of small molecule inhibitors since those were already known.
(Disclaimer: I'm the author of a competing approach)
Searching for new small-molecule inhibitors requires going through millions of novel compounds. But AlphaFold3 was evaluated on a dataset that tends to be repetitive: https://olegtrott.substack.com/p/are-alphafolds-new-results-...
With all the money cutting happening, I am not surprised they are joining the bandwagon to get some investors...
I just read some days ago here on HN an interesting link which shows that more than 70% of VC funding goes straight to "AI" related products.
This thing is affecting all of us one way or another...
rs186 8 hours ago [-]
Was going to say about the same thing. I have some background in biomedical research from a while ago, and I could tell that, at a high level, the main body of the work here is similar to the methodology used in tons of research done many years ago. People have been using various machine learning/deep learning methods for a long time, and this is definitely not the significant development the headline tries to make it out to be, or that people are perceiving it as. Not to discount their work, but really, there's not too much to see here for the average reader on the Internet.
In other words, this is something that happens in the field all the time, most of which doesn't get any attention from people outside the field; this wouldn't have either, were it not for the "AI" buzzword in the article.
discodonkey 8 hours ago [-]
I think the authors of this article probably sought to highlight the fact that AI is now being used in medical research, rather than credit it with all the work (see "helps unravel" as opposed to "unravels").
Majora320 1 hour ago [-]
ML/"AI" has been used in medical research for years and years, the buzzword headlines are a recent phenomenon.
HWR_14 7 hours ago [-]
The authors of this article probably sought to have their names and phrases like "AI powered research" published together.
sublimefire 11 hours ago [-]
Yes, I do agree that much of the work was done using conventional methods and quite little was done with AI. The AI model did do the folding, though, which was IMO critical to understanding the structure and seeing the secondary substructure.
The title is clickbaity; it would be useful to stress that AI solves a very specific problem here that is extremely hard to solve otherwise. It is like a lego piece.
avogt27 3 hours ago [-]
Several crystal structures of the catalytic domain of the protein had already been determined. The DNA binding domain of the protein which AlphaFold predicted is a relatively common fold that probably could have been figured out using homology modeling, which was common 10+ years ago. Even the small molecule docking used pretty old school computational techniques, and all but one drug interacted with the predetermined structures. The analysis was indeed aided by AI in the form of AlphaFold, but my guess is it sped a couple things up rather than making them possible.
mbgerring 5 hours ago [-]
Press releases like this are published for the purposes of securing funding. Medical research departments at universities are currently under siege by the federal government. Emphasizing the use of AI is a great way to avoid Elon Musk's search, replace and destroy operation for research funding.
avogt27 4 hours ago [-]
I agree that this is probably at least partially a motivation, but it seems like a losing strategy to me. AlphaFold is run by a private company, and falsely elevating the importance of its use in the paper could be used to fuel the argument that all this research needs to be privatized. Given the current situation, I hope people realize that the breakthrough in structure prediction would be literally impossible without 70+ years of data generated by publicly funded research. Most of the foundational work in deep learning guided structure prediction was also publicly funded, with DeepMind getting in at the tail end of the race once it seemed like the problem could be brute-forced by throwing enough resources at it.
yieldcrv 56 minutes ago [-]
At the end of the paper it says
> *These authors contributed equally
so your position is satisfied by listing an AI amongst those authors
AdventureMouse 8 hours ago [-]
I agree but it says something about the level of interest and confidence people have in the current state of Alzheimer’s research.
How many people would have read the article if it didn’t mention AI?
skeeter2020 55 minutes ago [-]
I have multiple comments here and didn't read the article regardless!
api 7 hours ago [-]
Historically it's "superstar researcher discovers something new" where the superstar researcher actually relies on the research of hordes of grad students and postdocs.
theptip 5 hours ago [-]
It’s “AI helps unravel”, not “AI discovers”. And it’s newsworthy, as AI-assisted discoveries are not yet boringly well-known.
I think it’s cool to see, and a good counterpoint to the “AI can’t do anything except generate slop” negativity that seems surprisingly common round here.
nonameiguess 2 hours ago [-]
It's helpful when reading these kinds of things to realize what you're reading. This isn't research. It's a press release. The author lists himself as a "Public Information Officer" for UC San Diego. Looking back through his article archives, it appears most, if not all, of the press releases place heavy emphasis on the technology used by the research rather than anything about the research itself.
Go to the current very last page and he's hyping up nanotech in 2015, which, as far as I'm aware, didn't end up panning out or really going anywhere. https://today.ucsd.edu/archives/author/Liezel_Labios/P260
When I read the title of the article in my RSS feed my first instinct was to go straight to here with a snarky “How was it not actually AI that did this?” in my head…
As usual I was not disappointed.
SwtCyber 12 hours ago [-]
Honestly, the fact that the core discovery still relied so heavily on classic biochemistry and experimental validation actually makes it even more impressive to me
jamesrcole 11 hours ago [-]
[EDIT: people downvoting this, how about you explain what you object to in it]
> It's really a bummer to see this marketed as 'AI Discovers Something New'.
The headline doesn't suggest that. It's "AI Helps Unravel", and that seems a fair and accurate claim.
And that's true for the body of the article, too.
bGl2YW5j 12 hours ago [-]
Thanks for highlighting this
amelius 12 hours ago [-]
> The authors in the actual paper carried out an enormous amount of work, the vast majority of which is relatively standard biochemistry and cell biology - nothing to do with computational techniques.
OK but if the AI did all the non-standard work, then that's even more impressive, no?
rad_gruchalski 20 hours ago [-]
This is an interesting observation:
> With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of known transcription factors. The similarity is solely in the structure and not in the protein sequence.
Reminds me of: if you come across a dataset you have no idea of what it is representing, graph it.
davidrupp 6 hours ago [-]
Whenever I see the term "AI" or similar, I mentally substitute the phrase "a lot of math, done very quickly", which is more concrete, and typically helps me sort out the stuff that still seems plausible, as in the sentence you quoted.
JensRantil 3 hours ago [-]
I always read "ML", because that's what 98% of all AI really is, but rebranded.
kylehotchkiss 19 hours ago [-]
Pardon my poor bio education, but couldn’t this same outcome have been reached if the protein had been put through X-ray crystallography?
colingauvin 19 hours ago [-]
I worked for a while on extremophile Archaeal viruses - the type that infect organisms that manage to live in volcanic hot springs, for instance. These are ecological niches that are old and extremely divergent. There's little genetic exchange between life around the hot springs and life within them.
The typical route of discovering those viruses was first genetic. When you get a genome (especially back when this work was initiated), you'd BLAST all the gene sequences against all known organisms to look for homologs. That's how you'd annotate what the gene does. Much more often than not, you'd get back zero results - these genes had absolutely no sequence similarity to anything else known.
My PI would go through and clone every gene of the virus into bacteria to express the protein. If the protein was soluble, we'd crystallize it. And basically every time, once the structure was solved, if you did a 3D search (using Dali Server or PDBe Fold), there would be a number of near identical hits.
In other words, these genes had diverged entirely at the sequence level, but without changing anything at the structural (and thus functional) level.
Presumably, if AlphaFold is finding the relationship, there's some information preserved at the sequence level - but that could potentially be indirect, such as co-evolution. Either way, it's finding things no human-guided algorithm has been able to find.
Centigonal 16 hours ago [-]
> Presumably, if AlphaFold is finding the relationship, there's some information preserved at the sequence level
This is not my area of expertise, and maybe I'm misunderstanding this, but I thought that what AlphaFold does is extrapolate a structure from the sequence. The actual relationship with the other existing proteins would have been found by the investigators through other, more traditional means (like the 3D search you mentioned).
IX-103 8 hours ago [-]
I'm not sure about that. The way AlphaFold works involves transforming the protein from a vector space representing the sequence to a different vector space representing the folded structure and back again as it performs iterative refinement. Presumably you could perform a comparison in the structure space to find homologs that have completely different sequences - they would just have a high cosine similarity.
Checking sub-regions of the structure would be more difficult, but depending on how the structural representation works it could just be computationally intensive.
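If that picture is right, the comparison itself is simple. Here's a toy Python sketch of the idea - the four-dimensional "structure embeddings" are entirely made up for illustration, and real models don't expose a neat per-protein vector like this, but it shows what a high cosine similarity between sequence-dissimilar proteins would look like:

    import math

    def cosine_similarity(a, b):
        # Cosine of the angle between two vectors; 1.0 means same direction.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Made-up "structure space" embeddings for three hypothetical proteins.
    # A and B are meant to represent unrelated sequences with the same fold;
    # C represents a different fold.
    protein_a = [0.9, 0.10, 0.40, 0.00]
    protein_b = [0.8, 0.15, 0.45, 0.05]
    protein_c = [0.0, 0.90, 0.10, 0.80]

    print(cosine_similarity(protein_a, protein_b))  # high (~0.99) -> likely same fold
    print(cosine_similarity(protein_a, protein_c))  # low (~0.11)  -> different fold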
colingauvin 4 hours ago [-]
This is a very big misconception about AlphaFold. It's not generating a structure totally de novo from the sequence. Instead it's primarily finding relationships at the sequence level to other solved structures. If those structure/sequence relationships didn't exist somewhere, AF wouldn't work, because it doesn't really have much information about protein folding from first principles. There are some small de novo elements, but nothing really groundbreaking. Where AF's true strength lies is in its ability to detect relationships we have been unable to detect with any other method.
Centigonal 4 hours ago [-]
Wow, that makes sense. Thank you for explaining this -- it makes Alphafold a little less inexplicable magic and a little more science/engineering in my mind.
Teever 14 hours ago [-]
Can you explain to a layman how wildly different genes can produce identical proteins?
mtlmtlmtlmtl 11 hours ago [-]
IANAB, but from what I do understand, it depends what you mean by different genes. Information-wise, DNA is a string of base-4 digits (nucleotides) in groups of 3 digits; these groups are called codons. Each codon corresponds to a specific amino acid*. A protein is made up of a bunch of different amino acids chained together. The gene determines which amino acids are chained together and in what order. This long chain of amino acids tends to fold up into a complex 3-dimensional structure, and this 3-dimensional structure determines the protein's function.
Now, there are a couple ways a gene could be different without altering the protein's function. It turns out multiple codons can code for the same amino acid. So if you switch out one codon for another which codes for the same amino acid, obviously you get a chemically identical sequence and therefore the exact same protein. The other way is you switch an amino acid, but this doesn't meaningfully affect the folded 3D structure of the finished protein, at least not in a way that alters its function. Both these types of mutations are quite common; because they don't affect function, they're not "weeded out" by evolution and tend to accumulate over evolutionary time.
* except for a few that are known as start and stop codons. They delineate the start and end of a gene.
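To make the codon redundancy concrete, here is a minimal Python sketch. The codon-to-amino-acid entries are from the standard genetic code, but the two "gene" fragments are invented purely for illustration:

    # A few entries from the standard genetic code (DNA codons -> amino acids).
    CODON_TABLE = {
        "TTA": "Leu", "TTG": "Leu", "CTA": "Leu", "CTG": "Leu",  # four codons, one amino acid
        "GCT": "Ala", "GCC": "Ala",
        "AAA": "Lys", "AAG": "Lys",
    }

    def translate(dna):
        # Read the sequence three bases (one codon) at a time.
        return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

    # Two invented gene fragments that differ at every codon at the DNA level...
    gene_1 = "TTAGCTAAA"
    gene_2 = "CTGGCCAAG"

    # ...yet encode exactly the same chain of amino acids (a "silent" difference).
    print(translate(gene_1))  # ['Leu', 'Ala', 'Lys']
    print(translate(gene_2))  # ['Leu', 'Ala', 'Lys']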
clort 13 hours ago [-]
also a layman, but:
You could build houses from bricks, timber or poured concrete that all looked the same in the end. Their internal structures and methods of construction would be different, but they would have the same form.
I'm reading the GP's comment similarly.
DrAwdeOccarim 9 hours ago [-]
This is a perfect analogy.
Source: Am structural biochemist
SideburnsOfDoom 12 hours ago [-]
also a layman, but:
genes are instructions for building proteins.
For a given output, you could write a program in wildly different programming languages, or even use the same language but structure it in wildly different ways.
If there's no match for the source code (genes), then find a match for the output (protein).
colingauvin 4 hours ago [-]
There are basically 4 classes of amino acids:
Non-polar
Polar
Acidic
Basic
In terms of 3D fold - i.e. the general abstract shape of the protein in 3D, you can make loads of substitutions without changing it, generally as long as you stay within the same class.
It's not until you compare the 3D shape that you see the relationship.
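A rough Python sketch of that idea, as a toy "conservative substitution" check. The four-class grouping below follows the common textbook convention; real substitution scoring (e.g. BLOSUM matrices) is more nuanced, and the two sequences are invented:

    # Common textbook grouping of the 20 standard amino acids (one-letter codes).
    AA_CLASS = {}
    for aa in "AVLIMFWPG":
        AA_CLASS[aa] = "non-polar"
    for aa in "STCYNQ":
        AA_CLASS[aa] = "polar"
    for aa in "DE":
        AA_CLASS[aa] = "acidic"
    for aa in "KRH":
        AA_CLASS[aa] = "basic"

    def conservative_fraction(seq_a, seq_b):
        # Fraction of aligned positions where the substitution stays within the same class.
        same = sum(AA_CLASS[a] == AA_CLASS[b] for a, b in zip(seq_a, seq_b))
        return same / len(seq_a)

    # Two invented sequences with zero identical residues...
    seq_1 = "LKDSF"
    seq_2 = "IRETW"

    # ...yet every substitution stays within its chemical class, so the overall
    # fold could plausibly be preserved even with no sequence identity.
    print(conservative_fraction(seq_1, seq_2))  # 1.0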
im3w1l 18 hours ago [-]
What about convergent evolution? Are you ruling that out because you reason that there are many possible structures that could do the same job so it's too much of a coincidence how close it matches?
falcor84 19 hours ago [-]
Are you arguing that an in-silico result could have alternatively been achieved in-vitro? Yes, I suppose it could, but it sounds like that joke "A month in the laboratory can often save an hour in the library".
voxic11 17 hours ago [-]
Yes, but that is a big if. X-ray crystallography is very hard and expensive, and it's not even always possible to create crystals of proteins.
cdf 10 hours ago [-]
I always believed that the AI/LLM/ML hysteria is misapplied to software engineering... it just happens to be a field adjacent to it, but not one that can apply it very well.
Medicine and law, OTOH, suffer heavily from a fractal volume of data and a dearth of experts who can deal with the tedium of applying an expert eye to this much data. Imagine we start capturing ultrasounds and chest X-rays en masse, or giving legal advice to those who need help. LLMs/ML are more likely to get this right than to get computer code right.
merksittich 6 hours ago [-]
Somehow, LLMs always seem to be "more likely to get this right" for fields other than one's own (I suppose, this being HN). The term "Andy Grove Fallacy" coined by Derek Lowe (whose articles are frequently posted here, the term being referenced in a recent piece[1]) comes to mind...
I figured the fallacy you were talking about was the one Michael Crichton describes about reading a newspaper article on a topic he knows about vs one he doesn't, but it turns out that's called the "Gell-Mann Amnesia effect." [1]
> You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. [...]
> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
(My spouse was an ultrasound tech for many years.)
The problem with an example like ultrasound is that it's not a passive modality - you don't just take a sweep and then analyze it later. The tech is taking shots and adjusting as they go along to see things better in real time. There's all sorts of potential stuff in the way, often bowel and bones, and you have to work around all that to see what you need to.
A lot of the job is actively analyzing what you're seeing while you're scanning and then going for better shots of the things you see, and the experience and expertise needed to get the shots are the same skills required to analyze the images and know what shots to get. It's not just a matter of waving a wand around and then having the rad look at it later.
vonneumannstan 6 hours ago [-]
Techs take the scans but you need a doctor to interpret them. That's where AI can come in.
SketchySeaBeast 6 hours ago [-]
This is one of the many places where computer people simplify other professions.
Legally yes, the rad is the one interpreting it, but it's a very active process by the technologist. The ultrasound tech is actively interpreting the scans as they do them, and then using the wand to chase down what they notice to get better shots of things. If they don't see something the rad won't either, so you need that expertise there to identify things that don't look right, it's very real time and you can't do it post hoc.
vonneumannstan 30 minutes ago [-]
Did anyone suggest robots do ultrasounds? Who is simplifying it? Having literally just done one: the tech came in, basically said nothing and took a bunch of pictures and then Doc came in and interpreted the results.
SketchySeaBeast 6 minutes ago [-]
People are suggesting that AI interprets the images, which is a fundamental misunderstanding of the process, because the tech is making choices while taking the pictures of what the images should be of. You can't wait until the pictures are taken and given to the rad before the interpretation can begin, it has to be happening during the whole process. The question then is what is the place of the AI in that process? What is it automating?
discodonkey 8 hours ago [-]
When AI writes nonsensical code, it's a problem, but not a huge one. But when ChatGPT hallucinates while giving you legal/medical advice, there are tangible, severe consequences.
Unless there's going to be a huge reduction in hallucinations, I absolutely don't see LLMs replacing doctors or lawyers.
hermitShell 8 hours ago [-]
100% agree ‘chat bots’ will not be a revolutionary technology, but other uses of the underlying technology will be. General robotics, pharmaceuticals, new matter… and eventually 1st line medicines and law sure, but I sure don’t want doctors to vibe diagnose me, or lawmakers to vibe legislate.
IX-103 8 hours ago [-]
[Insert "let me laugh even harder" meme here]
That would be actual malpractice in either case.
LLMs have a history of fabricating laws and precedents when acting as a lawyer. Any advice from the LLM would likely be worse than just assuming something sensible, as that is more likely to reflect what the law is than what the LLM hallucinates it to be. Medicine is in many ways similar.
As for your suggestion to capture and analyze ultrasounds and X-rays en masse, that would be malpractice even if it were performed by an actual doctor instead of an AI. We don't know the base rates of many benign conditions, except that they are always higher than we expect. The additional images are highly likely to show conditions that could be either benign or dangerous, and additional procedures (such as biopsies) would be needed to determine which it is. This would create additional anxiety in patients from the possible diagnosis, and further pain and possible complications from the additional procedures.
While you could argue for taking these images and not acting on them, you would either tell the patients the results and leave them worried about what the discovered masses are (so they likely will have the procedures anyway) or you won't tell them (which has ethical implications). Good luck getting that past the institutional review board.
chairhairair 8 hours ago [-]
I don’t know what “Fractal volume of data” means exactly, but I think you’re underestimating how much more complicated biology is than software.
kjkjadksj 2 hours ago [-]
Well that is not how it is applied in the article at all
mobilejdral 20 hours ago [-]
Tying this to APOE, specifically e4, which has an increased requirement for choline: when choline levels are low, there can be a metabolic push that leads to elevated PHGDH activity and, consequently, increased serine synthesis. That is a neat connection, and maybe why we see positive results when we study choline supplements.
That is super interesting, as is the relationship between choline and sleep, with restorative sleep function, and specifically slow-wave activity, considered to be a significant driver of AD.
Yeah, part of my work is in dementia with slow-wave sleep enhancement, and we're putting together a menopause study hopefully to start in 2026.
xlbuttplug2 11 hours ago [-]
> In conclusion, our findings suggest that moderate dietary choline intake, ranging from 332.89 mg/d to 353.93 mg/d, is associated with lower odds of dementia and better cognitive performance.
Gemini tells me that amounts to ~850mg of alpha GPC or ~1900mg of citicoline. Eggs it is then.
criddell 9 hours ago [-]
How are you going to check Gemini’s math on that?
Claude tells me that’s 4-5 eggs per day or 5x150 mg alpha gpc capsules.
The eggs would be a lot more expensive in both time and materials plus most egg farms seem cruel (especially male chick killing)… I’m leaning towards alpha gpc supplements.
xlbuttplug2 3 hours ago [-]
> How are you going to check Gemini’s math on that?
Gemini used 40% choline by weight for alpha GPC and 18% for citicoline, which seems to check out with other sources.
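For what it's worth, the arithmetic is easy to redo yourself. A quick Python check using the midpoint of the range quoted above and those same weight fractions (the fractions are the only inputs worth independently verifying):

    # Midpoint of the dietary choline range quoted in the study (mg/day).
    target_choline = (332.89 + 353.93) / 2          # ~343 mg/day

    # Approximate choline content by weight (the figures Gemini reportedly used).
    ALPHA_GPC_FRACTION = 0.40
    CITICOLINE_FRACTION = 0.18

    print(target_choline / ALPHA_GPC_FRACTION)      # ~858 mg alpha GPC, close to "~850"
    print(target_choline / CITICOLINE_FRACTION)     # ~1908 mg citicoline, close to "~1900"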
> I’m leaning towards alpha gpc supplements.
I haven't looked into the studies recently, but there have been some negative findings with alpha GPC supplementation[1]. May be worth a gander.
Does a 100% safe and effective source of choline exist? Maybe a combination of eggs and supplements is the way to go?
xlbuttplug2 25 minutes ago [-]
This stuff goes over my head, but perhaps one can take something to lower TMAO (I see EVOO, allicin, resveratrol, PQQ from a cursory search) to offset choline supplementation. This is assuming TMAO is the cause of increased disease risk and not just a biomarker.
j45 16 hours ago [-]
wow, thanks for sharing.
pedalpete 20 hours ago [-]
It's good to see them classifying this as for "late onset Alzheimer's".
There is a theory that Alzheimer's as we currently understand it, is not one disease, but multiple diseases that are lumped into one category because we don't have an adequate test.
This is also where some of the controversy surrounding the Amyloid hypothesis comes from.
jvans 19 hours ago [-]
The controversy over the amyloid hypothesis comes from a Stanford professor faking data[1] and setting the field back decades. The amount of harm this individual caused is hard to overstate. He is also still employed by Stanford.
It's actually pretty easy to overstate the amount of harm caused by that one individual... you're doing it.
There are lots of good reasons to believe in the amyloid hypothesis, and no paper or even line of research is the one bedrock of the hypothesis. It was the foundational bedrock of Alzheimer's research back in the early 1990s (essentially, before Alzheimer's became one of the holy grail quests of modern medicine), after all; well before any of the fraudulent research into Alzheimer's was done.
The main good reason not to believe in amyloid is that every drug targeting amyloid plaques has failed to even slow Alzheimer's, even when they do impressive jobs in clearing out plaques--and that is a hell of a good reason to doubt the hypothesis. But no one was going to discover that failure until amyloid blockers read out their phase III clinical trial results, and that didn't really happen until about a decade ago.
DavidSJ 12 hours ago [-]
> every drug targeting amyloid plaques has failed to even slow Alzheimer's
Lecanemab and donanemab succeeded in slowing Alzheimer’s.
I know that it is very important for HN folks to be angry. But as someone who has a parent with this disease, I would like to be certain that the amyloid hypothesis is definitely not correct before we throw it entirely out with the bathwater. These simplified “one researcher caused an entire field to go astray for decades” explanations are much too pat for me to have any confidence in them.
jvans 18 hours ago [-]
A lot of people should be mad at Marc Tessier-Lavigne, not just HN folks. He lied for personal gain at the expense of scientific progress and millions of patients who suffer
tim333 9 hours ago [-]
I'm hoping AI may improve things by being programmed to optimize for scientific discovery rather than fame and money.
wizzwizz4 4 hours ago [-]
We're a very long way away from systems that work like this.
DaiPlusPlus 17 hours ago [-]
> These simplified “one researcher caused an entire field to go astray for decades” explanations are much too pat for me to have any confidence in them.
Right, monocausal explanations in-general will set-off my skept-o-sense too; but then my mind made me think of another example: Andrew Wakefield (except that AW succeeded more at convincing Facebook-moms than the scientific establishment - but still harmed society just as much, IMO)
razakel 10 hours ago [-]
Wakefield's fraud was quite sophisticated and did manage to fool many medical professionals.
adastra22 15 hours ago [-]
The amyloid hypothesis is absolutely not correct. We know this unequivocally.
Amyloid deposits correlate with Alzheimer’s, but they do not cause the symptoms. We know this because we have drugs which (in some patients, not approved for general use) completely clear out amyloids, but have no effect on symptoms or outcomes. We have other very promising medications that do nothing to amyloids. We also have tons of people who have had brain autopsies for other reasons and were found to have very high levels of amyloid deposits, but no symptoms of dementia prior to death.
Alzheimer’s isn’t caused by amyloids.
polskibus 14 hours ago [-]
I’m interested in this and it seems you have done your homework. Would you mind sharing some references?
pedalpete 16 hours ago [-]
My uncle died of the disease, and I work in neurotech/sleeptech, specifically in slow-wave enhancement which is showing promise in Alzheimer's.
I 100% agree with you that we shouldn't throw the baby out with the bathwater on this one. Data being falsified and the hypothesis being wrong are two different things.
literalAardvark 11 hours ago [-]
Can you recommend a reliable source for sleep improvement?
The internet is awash in random garbage and it'd be interesting to have a link that someone who actually sees sleep EEGs thinks is "80% there".
Re: Link, just to lower your load in answering.
apwell23 15 hours ago [-]
is anyone pursuing the hypothesis then?
Aurornis 15 hours ago [-]
> These simplified “one researcher caused an entire field to go astray for decades” explanations are much too pat for me to have any confidence in them.
Anyone who believes that an entire field and decades of research pivoted entirely around one researcher falsifying data is oversimplifying. The situation was not good, but it’s silly to act like it all came down to this one person and that there wasn’t anything else the industry was using as its basis for allocating research bets.
dev1ycan 18 hours ago [-]
Researchers spent decades already on it and couldn't get results for a reason.
bawolff 16 hours ago [-]
Regardless, it is still important not to fall into the fallacy fallacy (just because someone made a bad argument for something does not imply that the conclusion is necessarily false)
SwtCyber 12 hours ago [-]
The more we learn, the more it feels like "Alzheimer's" is just a convenient label for a bunch of different underlying pathologies that happen to look similar on the surface.
jedberg 16 hours ago [-]
This is a strong argument for universal healthcare. If we had universal healthcare in the USA, we'd have to have a common charting protocol and a medical chart exchange.
One thing that AI/ML is really good at is taking very large datasets and finding correlations that you wouldn't otherwise find. If everyone's medical chart were in one place, you could find things like "four years before presenting symptoms of pancreatic cancer, patients complain of increased nosebleeds", or things like that.
Of course we don't need universal healthcare to have a chart exchange, and the privacy issues are certainly something that needs consideration.
But the point is, I suspect we could find cures and leading indicators for a lot of diseases if everyone's medical records were available for analysis.
twobitshifter 7 hours ago [-]
It’s also a strong argument (maybe stronger) for both federal funding of health research and encouraging international students to study and complete PhDs in the US.
> The study co-authors (from left to right) Sheng Zhong, Junchen Chen, Wenxin Zhao, Ming Xu, Shuanghong Xue, Zhixuan Song and Fatemeh Hadi
>This work is partially funded by the National Institutes of Health (grants R01GM138852, DP1DK126138, UH3CA256960, R01HD107206, R01AG074273 and R01AG078185).
Universal Health Care would be great but we are at a place where the research itself may vanish from the US.
OJFord 11 hours ago [-]
Leaving aside common EHR / central database being orthogonal to universal healthcare, as addressed in sibling comments, having this data centrally still doesn't even make this as easy as you hope.
'patient complains of increased nosebleeds' isn't structured data you can query (or feed to ML) like that. It actually takes a physician having this kind of hypothesis, to then trawl through the records, reading unstructured notes, creating their own database for the purpose - you know, had/did not have nosebleed, developed/did not develop pancreatic cancer within 4 years, or whatever - so then they can do the actual analysis on the extracted data.
Where I think LLMs could indeed be very helpful is in this data collection phase: this is the structured data I want, this is the pile of notes, go. (Then you check some small percentage of them and if they're correct assume the rest are too. There's already huge scope for human error here, so this seems acceptable.)
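To make the "extract first, analyze second" split concrete, here is a minimal Python sketch. The record fields and all four counts are invented; the point is only that the statistical step is trivial once the free-text notes have been turned into rows like this:

    from dataclasses import dataclass

    @dataclass
    class ExtractedRecord:
        # Hypothetical structured fields pulled out of unstructured clinical notes.
        patient_id: str
        had_nosebleeds: bool
        pancreatic_cancer_within_4y: bool

    example = ExtractedRecord("pt-0001", had_nosebleeds=True, pancreatic_cancer_within_4y=False)

    # Toy 2x2 table over such records (completely made-up counts).
    a = 30    # nosebleeds, later cancer
    b = 970   # nosebleeds, no cancer
    c = 50    # no nosebleeds, later cancer
    d = 8950  # no nosebleeds, no cancer

    # Odds ratio: how much more likely the later diagnosis is given the earlier symptom.
    odds_ratio = (a / b) / (c / d)
    print(round(odds_ratio, 2))  # ~5.54 in this made-up example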
smallnix 16 hours ago [-]
Universal healthcare and having everyone's medical chart stored centrally can be related, but don't have to be. There are many countries with some form of universal healthcare and no centralized records.
jedberg 15 hours ago [-]
> There are many countries with some form of universal healthcare and no centralized records.
I believe you, but I'm curious how that works. When you go to a random doctor, do they have to request your records from all your other doctors? Similar to here in the USA when you have a PPO?
seszett 15 hours ago [-]
There are several different things here.
One, in some of the countries I know (with universal healthcare and no centralised records) you don't go to a random doctor. You have a declared family doctor and you have to go to them unless they are unavailable, in which case the other doctor you go to has to declare that you couldn't go to your doctor. It's a small hurdle to prevent doctor shopping, but it means people are more likely to always see the same doctor. Specialists are given the relevant information by the family doctor when referring a patient to a specialist, and in most other cases records are not really needed, or the ER will contact whoever to get the information they think they need. It might sound hazardous but in practice it works fine.
Second, some places have centrally-stored records but the access is controlled by the patient. Every access to the record is disclosed to the patient and he has the possibility to revoke access to anyone at any time. That generally goes together with laws that fundamentally oppose any automated access or sharing of these records to third parties.
And third, I don't understand what any of this has to do with whether healthcare access is universal or not? Universal healthcare without centralised records exists (in France, unless it has changed in recent years, but it at least existed for 60 years or so), and centralised records without universal healthcare could exist (maybe privately managed by insurance companies, since the absence of universal healthcare would indicate a pretty disengaged state).
adastra22 15 hours ago [-]
Yes. What’s surprising about this? The two topics seem orthogonal.
Universal healthcare is about who is paying, not necessarily about who is running the service.
Until very recently this was the case in Australia. If you started going to a different doctor you had to sign a form authorising record transfer.
This was somewhat annoying since unlike the UK system, the Australian system is essentially private GPs getting paid for your individual appointments by the government (so called bulk billing), so there's no guarantee that you can go to the same doctor every time.
razakel 10 hours ago [-]
That is the UK system. GPs are private contractors.
ViscountPenguin 37 minutes ago [-]
In the UK you have to choose a GP clinic, which you're stuck with until you get a transfer. This isn't the case in Australia, which is the difference I was trying to highlight.
cmrdporcupine 1 hours ago [-]
In the Canadian system doctors are still on the whole private practices. They just bill the government (the "single payer") instead of an insurance company. And they bill based on standardized payment formula decided by the government.
Basically, government funded and regulated doesn't mean government run.
There is no standardized EHR system here, despite provincial governments (which are who runs the systems) wasting millions over the last two decades trying to make that happen.
grepfru_it 15 hours ago [-]
>in the USA when you have a PPO
This was the last decade's way of doing things. The current decade's way is to stay within the desired charting system. That way you can one-click share data between doctors. Typically you would search for doctors that use the same charting platform. Epic is probably the largest one in the US today.
piotrkaminski 15 hours ago [-]
That certainly used to be the case in Canada 20 years ago, don't know if they've standardized since.
threatripper 16 hours ago [-]
Universal Healthcare is neither necessary nor sufficient for this. It's mostly about data protection laws why this doesn't work in Europe.
Calamitous 15 hours ago [-]
> If we had universal healthcare in the USA, we'd have to have a common charting protocol and a medical chart exchange.
Isn't this exactly what HIPAA was supposed to address?
apwell23 15 hours ago [-]
Can't they simply create a law for that instead of universal healthcare?
cwmoore 3 hours ago [-]
They who? Simply how?
I hope the author of this comment has another area of expertise.
xyst 14 hours ago [-]
EHRs (electronic health records) were supposed to be the "common charting protocol" back when the ACA passed.
Unfortunately so many junk systems were pushed to the market and the "common charting protocol" is highly dependent on the EHR used by the hospital system.
There _was_ supposed to be some interoperability between EHRs but I honestly haven’t been following it for quite some time.
As for availability of medical history to researchers, I highly doubt this will happen.
Big tech has ruined the trust between people and technology. People gave up their data to G, MS, FB, and others for many years.
We have yet to see any benefit for the common man or woman. Only the data is used against us. Used to divide us (echo chambers). Used to manipulate us (buy THIS, hate that, anti WoKe). Used to control uneducated and vulnerable population. Used to manipulate elections. Used to enrich the billionaire class.
insin 19 hours ago [-]
It's a pity the ridiculous level of LLM overhype from those chasing investment and profit is dragging "AI" through the mud with it
01100011 15 hours ago [-]
The ridiculous hype behind LLMs, like 3D gaming before it, is helping to pay for the advances in HW that enable this "AI".
psyclobe 6 hours ago [-]
A little too late for my mom, but maybe it will help me in the future...
mclau157 5 hours ago [-]
In what ways will you apply it?
fencepost 4 hours ago [-]
... Presumably by being someone with a family history of dementia and hopes for effective preventive measures and treatments which would presumably be taken orally, injected or infused?
dsign 11 hours ago [-]
This piece of the puzzle, and its finding, if confirmed, is very neat. But I think we are barking up the wrong tree, because senescence is inherently chaotic. Sometimes we identify a disease with a set of common symptoms even though there are many alternative causes that lead to those very symptoms. It's like "convergent symptoms", so to speak.
If I had any funding to work freely in these subjects, I would instead focus on the more fundamental questions of computationally mapping and reversing cellular senescence, starting with something tiny and trivial (but perhaps not tiny nor trivial enough) like a rotifer. My focus wouldn't be the biologists' "we want to understand this rotifer", "or we want to understand senescence", but more "can we create an exact computational framework to map senescence, a framework which can be extended and applied to other organisms"?
Sadly, funding for science is a lost cause, because even where/when it is available, it comes with all sort of political and ideological chains.
po 11 hours ago [-]
Have you ever lived with or helped a person with AD? It's not cellular senescence. What you're talking about is fine and well, but AD is a devastating disease that has very particular symptoms. We may not know all of the causes, but reversing cellular senescence isn't going to solve this.
Researching and curing AD is not barking up the wrong tree. There is a horrible deadly monster in that tree that needs defeating. I hope people also get scientific funding for other age-related issues.
I wonder if they used the output of AlphaFold? Remember that DeepMind published the 3D structures of hundreds of millions of proteins for FREE. Imagine if they had walled off that data behind an Elsevier-like subscription wall? They should credit DeepMind at least.
colingauvin 19 hours ago [-]
AlphaFold is regurgitating structural information from 10s of thousands of experimental structures acquired at great cost and published to the PDB for free, with no license restrictions of any kind.
derektank 15 hours ago [-]
You make a fair point but much of that work was funded by public grants, while AlphaFold was privately funded. Those generally come with different expectations
falcor84 19 hours ago [-]
Good point, the article [0] does mention AlphaFold but doesn't cite it.
Looks like it - it's such a minor and brief mention in the paper for the article to focus on it so much, lol. They probably should have cited it; it looks like they decided it was minor enough (or forgot) that they didn't put it in their software used/citations. It's super commonly used, though - I wouldn't be stunned if most of its uses never get cited. It's often just a quick check of whether deleting some section or doing some sort of fusion is going to cause a problem, or, if you've got something without a PDB structure, of finding a site to mess with that looks like it won't cause any problems. You can't count on it blindly, obviously, but it's super helpful. Even if the weird stuff you're studying might not be folded properly by the model, if it's pretty confident about some section of the protein and you want to stick a handle onto the protein to grab it with, it can let you know where's least likely to be a waste of time and money to try.
HarHarVeryFunny 19 hours ago [-]
> With AI, they could visualize the three-dimensional structure of the PHGDH protein.
Sure sounds like it.
DaiPlusPlus 16 hours ago [-]
While I (loosely) understand the concept of using a custom (foundational?) machine-learning model to explore some problem-space and devise solutions, I don't understand why it says they used "AI" to "visualize" a structure. A layperson is going to think they simply asked ChatGPT to solve the problem for them and it just worked and now OpenAI owns the cure for Alzheimer's.
...I ask because bio/chem visualization and simulation was a solved problem back in the 1980s (...back when bad TV shows used renders of spinning organic-chemistry hexagons on the protagonist's computer as a visual metaphor for doing science!).
tibbar 15 hours ago [-]
Protein folding is one of the oldest and hardest problems in computational biology. It is fair to describe the result of protein folding as a 3D model/visualization of the protein. DeepMind's AlphaFold was a big breakthrough in determining how arbitrary structures are folded. Not always correct, but when it is, often faster and cheaper than traditional methods. I believe the latest versions of AlphaFold incorporate transformers, but it's certainly not a large language model like ChatGPT.
netdevphoenix 9 hours ago [-]
The 202x trend of adding AI into every story, even when it's irrelevant, is getting tiresome.
xyst 14 hours ago [-]
The author here is shoehorning "AI" into the headline to boost views. Quite misleading.
They identified a possible upstream pathway that could help treat disease and build therapeutic treatments for Alzheimer’s.
I don’t know about you all, but I’m tired of the AI mania. At least the author didn’t put "blockchain" in the article.
SwtCyber 12 hours ago [-]
Curious to see how this line of research evolves, especially once they get into clinical trials
yapyap 11 hours ago [-]
AI steals credit for unraveling a cause of Alzheimer's so the bubble will exist a bit longer
bitwize 19 hours ago [-]
I'm an AI skeptic but this is AI doing its job.
Because there's AI as in "letting ChatGPT do the hard bits of programming or writing for me", for which it is woefully unsuited, and there's AI as in using machine learning as a statistical approach, which it fundamentally is. It's something you can pour data into and let the machine find how the data clump together, so you can investigate potential causative relationships the Mark I eyeball might have missed.
I'm excited for the possibilities these uses of AI might bring.
brailsafe 18 hours ago [-]
Agreed. I have barely any enthusiasm for the former aside from potential time savings, but have always been fascinated by the applications of what I think of as scientific deep large-scale pattern matching in domains that aren't practical to tackle with raw human labor.
jasonkester 14 hours ago [-]
I notice that I have a form of Gell-Mann amnesia for this sort of thing. Do we need a new term, or does that cover it?
Because I find myself nodding along with optimism, having two grandfathers that died from this disease. It’d be great if something could sift through all the data and come up with a novel solution.
Then I remember that this is the same technology that eagerly tries to autocomplete every other line of my code to include two nonexistent variables and a nonexistent function.
I hope this field has some good people to sanity check this stuff.
bitwize 14 hours ago [-]
"I was optimistic when my friend told me about his new hammer and how much it helped him assemble a cabinet he was working on.
Then I remember that this is the same technology that failed to drive in screws for a project I was working on a week ago."
The AI that's being used in applications like this is not generative AI. It really is just "sparkling statistics" and it's tremendously useful in applications like this because it can accelerate the finding of patterns in data that form the basis of new discoveries.
dudeinjapan 6 hours ago [-]
If AI causes us humans to workout our brains less, maybe it is also causing Alzheimer's. In the words of Homer Simpson: "To alcohol! The cause of--and solution to--all life's problems."
This article is trashy trash trash. The only mention of AI in the actual paper is that they used ChatGPT for grammar correction. The article doesn't explain what or how AI was used beyond "three dimensional modeling".
One of the paper's authors was quoted about the use of AI. But without explaining precisely how the AI was used and why it was valuable, this article is basically clickbait trash. Was AI necessary for their key result? If so, how and why? We don't know!
Everything about this screams "just say AI and we'll get more attention".
AIPedant 16 hours ago [-]
That's not true, the paper used AlphaFold 3. The disclaimer is about generative AI, not AI writ large.
I agree the UCSD writeup is pretty misleading; the authors used protein-modeling software, which is really not very interesting, and the fact that the SOTA protein modeler uses machine learning is not at all relevant to this specific paper.
forrestthewoods 15 hours ago [-]
> That's not true, the paper used AlphaFold 3
Ah yeah I skimmed and searched for “AI” so missed that. The UCSD article does not contain the term “AlphaFold” so yeah they’re definitely engagement baiting.
devmor 17 hours ago [-]
Hell yes! This is where machine learning shines and I’m so happy to see another incredible breakthrough from using it in medical science.
It’s a nice reprieve from “we’re using a chatbot as a therapist and it started telling people to kill themselves” type news.
maggiepatells42 4 hours ago [-]
[dead]
gaopeng860330 11 hours ago [-]
[dead]
ganterich 15 hours ago [-]
Minor nitpick about the headline. AI didn't help, it was used to identify a therapeutic candidate. I dislike personification of AI because people treat it as something religious already. AI doesn't do anything, the people using it do.
mgraczyk 15 hours ago [-]
"seatbelts help people survive car accidents"
This is a completely normal way to talk about inanimate objects
kazinator 15 hours ago [-]
The difference is that in the case of a seatbelt, people won't make a personified interpretation of "help". They know they need to make the tool-like interpretation, because it's all that makes sense.
mgraczyk 14 hours ago [-]
"shovels help people dig holes"
derektank 15 hours ago [-]
I share your concern with anthropomorphizing AI tools, but I don't think this is really a serious example. It seems fairly common in English to say a tool, even a rudimentary one, helps its user. Spears helped hunter-gatherers outcompete other apex predators, road networks helped Rome maintain a large empire, solar panels help us reduce carbon emissions, etc.
ganterich 14 hours ago [-]
I agree with your point and the first comment about seatbelts, thanks for that perspective.
The reason I personally still see an issue is that personification of AI is an actual issue, while personification of seatbelts and spears is not.
This lands on the frontpage of hackernews, and people don't read every article on the frontpage, but they read the headlines, and the headlines carry latent messages that get interpreted.
And with the huge hype around LLMs, AI for most people means LLMs. So most people will read this headline and not the article and subconsciously conclude that researchers used LLMs to solve Alzheimer's, further strengthening their religious belief that LLMs are mystical oracles of truth, potentially able to solve widespread deadly diseases, while not realizing that the researchers might have used highly specialized machine learning or whatnot instead of text-completion algorithms.
codr7 10 hours ago [-]
It's a hell of a lot more likely that the primary causes of Alzheimer's will be found outside of the human body imo: medicines/pollution/additives/radiation, etc.
The human body is a pretty amazing construction, nature doesn't make a lot of mistakes.
mihalycsaba 8 hours ago [-]
It makes a lot of mistakes. Without modern medicine we wouldn't live this long. Alzheimer's is not a modern disease.
hello_computer 4 hours ago [-]
The primary improvement to human life expectancy was not medicine, but public sanitation. Prior to that, deaths were bimodally distributed, with peaks at infancy and old age. When people (and doctors) started washing their hands, the infant peak was flattened. Dead toddlers really kick the average in the nuts.
hello_computer 4 hours ago [-]
There is a strong correlation (noting my word choice here before the incorrectors accuse me of confusing it with causation) between general anesthesia and onset of dementia. I’m not going to dig it up, but it is easy enough to find on pubmed. There is a decent amount of noticing in the literature.
It's not like bic pens. It's a new technique they couldn't do before that helped crack the mystery.
Also the title is "AI Helps..." not "AI Discovers" so that's kind of a strawman. I don't think anyone is denying the humans did great work. Maybe it's more like Joe Boggs uses the Hubble telescope to find a new galaxy and moaning because the telescope gets a mention.
I'm quite enthusiastic about the AI bit. My grandad died with alzheimer's 50 years ago. My sister is due to die of als in a couple of years. Both areas have been kind of stuck for decades. I'm hoping the AI modeling allows some breakthroughs.
I can't tell you how many times I've sat through talks where someone (usually ill-equipped to really engage with the research) suggests that the speaker tries AlphaFold for this or that without a clear understanding of what sort of biological insight they're expecting. It's also a joke at this point how often grad students plug their protein into AlphaFold and spend several minutes giving a half-baked analysis of the result. There are absolutely places where structure prediction is revolutionizing things including drug discovery, but can we acknowledge the hype when we see it?
I'm very sorry for your loss, my aunt is also declining due to this disease. I think statistically everyone either goes through it or becomes a caretaker if they live long enough.
The title cites the AI contribution, not the human
Maybe I've underestimated the impact the AI tooling has had then, because seems to me that your example wouldn't be an issue as it's literally the entire tool to discover.
> I'm hoping the AI modeling allows some breakthroughs.
I'm actually on board with you on this, I think it can be extrememly useful and really speed things up when dealing with such huge amount of complex data that needs to be worked with, my only gripe here was the title itself. It's seems forced when it could have been "Amazing breakthrough discovered to unravel cause of Alzheimer’s" - From here the main body of the article would match the title, with a nice shout out to a really creative use of AI.
[1] https://ieeexplore.ieee.org/document/5361301
Would that technique do the same? If not, why?
(Disclaimer: I'm the author of a competing approach)
Searching for new small-molecule inhibitors requires going through millions of novel compounds. But AlphaFold3 was evaluated on a dataset that tends to be repetitive: https://olegtrott.substack.com/p/are-alphafolds-new-results-...
I just read some days ago here on HN an interesting link which shows that more than 70% of VC funding goes straight to "AI" related products.
This thing is affecting all of us one way or another...
In other words, this is something that happens in the field all the time, most of which don't get any attention from people outside the field, were it not because of the "AI" buzzword in the article.
The title is clickbaity, it would be useful to stress that AI solves a very specific problem here that is extremely hard to do otherwise. It is like a lego piece.
> *These authors contributed equally
so your position is satisfied by listing an AI amongst those authors
How many people would have read the article if it didn’t mention AI?
I think it’s cool to see, and a good counterpoint to the “AI can’t do anything except generate slop” negativity that seems surprisingly common round here.
Go the current very last page and he's hyping up nanotech in 2015, which as far as I'm aware, didn't end up panning out or really going anywhere. https://today.ucsd.edu/archives/author/Liezel_Labios/P260
As usual I was not disappointed.
> It's really a bummer to see this marketed as 'AI Discovers Something New'.
The headline doesn't suggest that. It's "AI Helps Unravel", and that seems a fair and accurate claim.
And that's true for the body of the article, too.
OK but if the AI did all the non-standard work, then that's even more impressive, no?
> With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of known transcription factors. The similarity is solely in the structure and not in the protein sequence.
Reminds me of: if you come across a dataset you have no idea of what it is representing, graph it.
The typical route of discovering those viruses was first genetic. When you get a genome (especially back when this work was initiated), you'd BLAST all the gene sequences against all known organisms to look for homologs. That's how you'd annotate what the gene does. Much more often than not, you'd get back zero results - these genes had absolutely no sequence similarity to anything else known.
My PI would go through and clone every gene of the virus into bacteria to express the protein. If the protein was soluble, we'd crystallize it. And basically every time, once the structure was solved, if you did a 3D search (using Dali Server or PDBe Fold), there would be a number of near identical hits.
In other words, these genes had diverged entirely at the sequence level, but without changing anything at the structural (and thus functional) level.
Presumably, if AlphaFold is finding the relationship, there's some information preserved at the sequence level - but that could potentially be indirect, such as co-evolution. Either way, it's finding things no human-guided algorithm has been able to find.
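If it helps make that concrete, here's a rough sketch of that two-step workflow in Python. The filenames, the Foldseek database name and the e-value cutoff are placeholders, and the work I described used crystallography plus the Dali server rather than a predicted model and Foldseek - this is just the shape of the thing:

    # Step 1: sequence-level search. For these viral proteins, BLAST against
    # NCBI's nr database typically returns nothing significant.
    from Bio import SeqIO
    from Bio.Blast import NCBIWWW, NCBIXML

    record = SeqIO.read("viral_protein.fasta", "fasta")        # hypothetical input
    handle = NCBIWWW.qblast("blastp", "nr", str(record.seq))
    blast = NCBIXML.read(handle)
    seq_hits = [a.title for a in blast.alignments if a.hsps[0].expect < 1e-5]
    print(f"sequence homologs: {len(seq_hits)}")                # often 0

    # Step 2: structure-level search. Compare a solved (or predicted) 3D model
    # against the PDB; this is where the near-identical hits show up.
    import subprocess
    subprocess.run(
        ["foldseek", "easy-search",
         "viral_protein.pdb",    # experimental structure or AlphaFold model
         "pdb_db",               # a Foldseek database built from the PDB beforehand
         "structure_hits.tsv",   # tab-separated hit list
         "tmp"],
        check=True,
    )

Same point as above: zero hits at the sequence level, plenty at the structure level.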
This is not my area of expertise, and maybe I'm misunderstanding this, but I thought that what AlphaFold does is extrapolate a structure from the sequence. The actual relationship with the other existing proteins would have been found by the investigators through other, more traditional means (like the 3D search you mentioned).
Checking sub-regions of the structure would be more difficult, but depending on how the structural representation works it could just be computationally intensive.
Now, there are a couple ways a gene could be different without altering the protein's function. It turns out multiple codons can code for the same amino acid. So if you switch out one codon for another which codes for the same amino acid, obviously you get a chemically identical sequence and therefore the exact same protein. The other way is you switch an amino acid, but this doesn't meaningfully affect the folded 3D structure of the finished protein, at least not in a way that alters its function. Both these types of mutations are quite common; because they don't affect function, they're not "weeded out" by evolution and tend to accumulate over evolutionary time.
* except for a few that are known as start and stop codons. They delineate the start and end of a gene.
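A toy illustration of the first kind of change (synonymous codons), using Biopython with made-up sequences:

    from Bio.Seq import Seq

    gene_a = Seq("ATGGCTCTGAAA")   # Met-Ala-Leu-Lys, one choice of codons
    gene_b = Seq("ATGGCGTTGAAG")   # same amino acids, different codons

    print(gene_a.translate())      # MALK
    print(gene_b.translate())      # MALK
    print(gene_a == gene_b)        # False: the genes differ, the protein doesn't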
You could build houses from bricks, timber or poured concrete that all looked the same in the end. Their internal structures and methods of construction would be different, but they would have the same form.
I'm reading the GP's comment similarly.
Source: Am structural biochemist
genes are instructions for building proteins.
For a given output, you could write a program in wildly different programming languages, or even use the same language but structure it in wildly different ways.
If there's no match for the source code (genes), then find a match for the output (protein).
Amino acids fall into a few broad chemical classes: non-polar, polar, acidic and basic.
In terms of 3D fold - i.e. the general abstract shape of the protein in 3D, you can make loads of substitutions without changing it, generally as long as you stay within the same class.
It's not until you compare the 3D shape that you see the relationship.
Medicine and Law, OTOH, suffer heavily from a fractal volume of data and a dearth of experts who can deal with the tedium of applying an expert eye to this much data. Imagine we start capturing ultrasounds and chest X-rays en masse, or giving legal advice to those who need help. LLMs/ML are more likely to get this right than they are to get writing computer code right.
[1] https://www.science.org/content/blog-post/end-disease
> You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. [...]
> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
1. https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
The problem with an example like ultrasound is that it's not a passive modality - you don't just take a sweep and then analyze it later. The tech is taking shots and adjusting as they go along to see things better in real time. There's all sorts of stuff potentially in the way - often bowel and bones - and you have to work around all that to see what you need to.
A lot of the job is actively analyzing what you're seeing while you're scanning and then going for better shots of the things you see, and the experience and expertise needed to get the shots are the same skills needed to analyze the images and know which shots to get. It's not just a matter of waving a wand around and then having the rad look at it later.
Legally, yes, the rad is the one interpreting it, but it's a very active process for the technologist. The ultrasound tech is actively interpreting the scans as they do them, and then using the wand to chase down what they notice to get better shots of things. If they don't see something, the rad won't either, so you need that expertise there to identify things that don't look right; it's very real-time and you can't do it post hoc.
Unless there's going to be a huge reduction in hallucinations, I absolutely don't see LLMs replacing doctors or lawyers.
That would be actual malpractice in either case.
LLMs have a history of fabricating laws and precedents when acting as a lawyer. Any advice from the LLM would likely be worse than just assuming something sensible, as that is more likely to reflect what the law is than what the LLM hallucinates it to be. Medicine is in many ways similar.
As for your suggestion to capture and analyze ultrasounds and X-rays en masse, that would be malpractice even if it were performed by an actual doctor instead of an AI. We don't know the base rate of many benign conditions, except that it is always higher than we expect. The additional images are highly likely to show conditions that could be either benign or dangerous, and additional procedures (such as biopsies) would be needed to determine which it is. This would create additional anxiety in patients from the possible diagnosis, and further pain and possible complications from the additional procedures.
While you could argue for taking these images and not acting on them, you would either tell the patients the results and leave them worried about what the discovered masses are (so they would likely have the procedures anyway) or not tell them (which has ethical implications). Good luck getting that past the institutional review board.
https://www.sciencedirect.com/science/article/pii/S000291652...
https://www.jarlife.net/3844-choline-sleep-disturbances-and-...
PEMT (phosphatidylethanolamine N-methyltransferase) is what makes choline in the body, but it depends on estrogen.(https://pmc.ncbi.nlm.nih.gov/articles/PMC3020773/)
Gemini tells me that amounts to ~850mg of alpha GPC or ~1900mg of citicoline. Eggs it is then.
Claude tells me that’s 4-5 eggs per day or 5x150 mg alpha gpc capsules.
The eggs would be a lot more expensive in both time and materials plus most egg farms seem cruel (especially male chick killing)… I’m leaning towards alpha gpc supplements.
Gemini used 40% choline by weight for alpha GPC and 18% for citicoline, which seems to check out with other sources.
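For what it's worth, the arithmetic is easy to sanity-check yourself rather than trusting a chatbot. The target choline amount below (~340 mg) is just inferred from the figures quoted above, not a recommendation:

    # Rough conversion: mass of supplement needed to supply a given amount of
    # choline, using the commonly cited choline-by-weight fractions.
    target_choline_mg = 340          # inferred: 850 * 0.40 ≈ 1900 * 0.18 ≈ 340

    fractions = {"alpha-GPC": 0.40, "citicoline": 0.18}
    for name, frac in fractions.items():
        print(f"{name}: ~{target_choline_mg / frac:.0f} mg")    # ~850 / ~1889 mg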
> I’m leaning towards alpha gpc supplements.
I haven't looked into the studies recently, but there have been some negative findings with alpha GPC supplementation[1]. May be worth a gander.
[1] https://examine.com/supplements/alpha-gpc/#what-are-alpha-gp...
https://pubmed.ncbi.nlm.nih.gov/38733921/
Does a 100% safe and effective source of choline exist? Maybe a combination of eggs and supplements is the way to go?
There is a theory that Alzheimer's as we currently understand it, is not one disease, but multiple diseases that are lumped into one category because we don't have an adequate test.
This is also where some of the controversy surrounding the Amyloid hypothesis comes from.
[1] https://stanforddaily.com/2023/07/19/stanford-president-resi...
There are lots of good reasons to believe in the amyloid hypothesis, and no paper or even line of research is the one bedrock of the hypothesis. It was the foundational bedrock of Alzheimer's research back in the early 1990s (essentially, before Alzheimer's became one of the holy grail quests of modern medicine), after all; well before any of the fraudulent research into Alzheimer's was done.
The main good reason not to believe in amyloid is that every drug targeting amyloid plaques has failed to even slow Alzheimer's, even when they do an impressive job of clearing out plaques - and that is a hell of a good reason to doubt the hypothesis. But no one was going to discover that failure until amyloid blockers read out their phase III clinical trial results, and that didn't really start happening until about a decade ago.
Lecanemab and donanemab succeeded in slowing Alzheimer’s.
As did gantenerumab in a recent prevention trial: https://www.alzforum.org/news/research-news/plaque-removal-d...
Right, monocausal explanations in general set off my skept-o-sense too; but then it made me think of another example: Andrew Wakefield (except that AW succeeded more at convincing Facebook moms than the scientific establishment - but still harmed society just as much, IMO).
Amyloid deposits correlate with Alzheimer’s, but they do not cause the symptoms. We know this because we have drugs which (in some patients, not approved for general use) completely clear out amyloids but have no effect on symptoms or outcomes. We have other very promising medications that do nothing to amyloids. We also have tons of people who had brain autopsies for other reasons and were found to have very high levels of amyloid deposits, but no symptoms of dementia prior to death.
Alzheimer’s isn’t caused by amyloids.
I 100% agree with you that we shouldn't throw the baby out with the bathwater on this one. Data being falsified and the hypothesis being wrong are two different things.
The internet is awash in random garbage and it'd be interesting to have a link that someone who actually sees sleep EEGs thinks is "80% there".
Re: Link, just to lower your load in answering.
Anyone who believes that an entire field and decades of research pivoted entirely around one researcher falsifying data is oversimplifying. The situation was not good, but it’s silly to act like it all came down to this one person and that there wasn’t anything else the industry was using as the basis for allocating its research bets.
One thing that AI/ML is really good at is taking very large datasets and finding correlations that you wouldn't find otherwise. If everyone's medical chart were in one place, you could find things like "four years before presenting symptoms of pancreatic cancer, patients complain of increased nosebleeds".
Of course we don't need universal healthcare to have a chart exchange, and the privacy issues are certainly something that needs consideration.
But the point is, I suspect we could find cures and leading indicators for a lot of diseases if everyone's medical records were available for analysis.
> The study co-authors (from left to right) Sheng Zhong, Junchen Chen, Wenxin Zhao, Ming Xu, Shuanghong Xue, Zhixuan Song and Fatemeh Hadi
>This work is partially funded by the National Institutes of Health (grants R01GM138852, DP1DK126138, UH3CA256960, R01HD107206, R01AG074273 and R01AG078185).
Universal Health Care would be great but we are at a place where the research itself may vanish from the US.
'Patient complains of increased nosebleeds' isn't structured data you can query (or feed to ML) like that. It actually takes a physician having this kind of hypothesis and then trawling through the records, reading unstructured notes and creating their own database for the purpose - you know, had/did not have nosebleed, developed/did not develop pancreatic cancer within 4 years, or whatever - so that they can do the actual analysis on the extracted data.
Where I think LLMs could indeed be very helpful is in this data collection phase: this is the structured data I want, this is the pile of notes, go. (Then you check some small percentage of them and if they're correct assume the rest are too. There's already huge scope for human error here, so this seems acceptable.)
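A minimal sketch of what that collection phase could look like, assuming some LLM API behind a placeholder call_llm function; the schema and field names are made up for the nosebleed example:

    import json
    import random

    SCHEMA = {"nosebleeds_reported": "bool", "pancreatic_cancer_dx": "bool"}

    def call_llm(prompt: str) -> str:
        """Placeholder for whatever chat-completion API you actually use."""
        raise NotImplementedError

    def extract(note: str) -> dict:
        """Turn one unstructured clinical note into the structured fields above."""
        prompt = (
            "Answer with JSON only, matching this schema: "
            f"{json.dumps(SCHEMA)}\n\nNote:\n{note}"
        )
        return json.loads(call_llm(prompt))

    def spot_check(notes: list[str], extractions: list[dict], k: int = 20):
        """Random sample of (note, extraction) pairs for the manual review step."""
        idx = random.sample(range(len(notes)), min(k, len(notes)))
        return [(notes[i], extractions[i]) for i in idx]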
I believe you, but I'm curious how that works. When you go to a random doctor, do they have to request your records from all your other doctors? Similar to here in the USA when you have a PPO?
One, in some of the countries I know (with universal healthcare and no centralised records) you don't go to a random doctor. You have a declared family doctor and you have to go to them unless they are unavailable, in which case the other doctor you go to has to declare that you couldn't go to your doctor. It's a small hurdle to prevent doctor shopping, but it means people are more likely to always see the same doctor. Specialists are given the relevant information by the family doctor when referring a patient to a specialist, and in most other cases records are not really needed, or the ER will contact whoever to get the information they think they need. It might sound hazardous but in practice it works fine.
Second, some places have centrally stored records, but access is controlled by the patient. Every access to the record is disclosed to the patient, who can revoke anyone's access at any time. That generally goes together with laws that fundamentally oppose any automated access to these records or sharing of them with third parties.
And third, I don't understand what any of this has to do with whether healthcare access is universal or not. Universal healthcare without centralised records exists (in France, unless it has changed in recent years, but it existed for at least 60 years or so) and centralised records without universal healthcare could exist (maybe privately managed by insurance companies, since the absence of universal healthcare would indicate a pretty disengaged state).
Universal healthcare is about who is paying, not necessarily about who is running the service.
This was somewhat annoying since unlike the UK system, the Australian system is essentially private GPs getting paid for your individual appointments by the government (so called bulk billing), so there's no guarantee that you can go to the same doctor every time.
Basically, government funded and regulated doesn't mean government run.
There is no standardized EHR system here, despite provincial governments (which are who runs the systems) wasting millions over the last two decades trying to make that happen.
This was the last decade's way of doing things. The current decade's way is to stay within the desired charting system. That way you can one-click share data between doctors. Typically you would search for doctors who use the same charting platform. Epic is probably the largest one in the US today.
Isn't this exactly what HIPAA was supposed to address?
I hope the author of this comment has another area of expertise.
Unfortunately so many junk systems were pushed to the market and the "common charting protocol" is highly dependent on the EHR used by the hospital system.
There _was_ supposed to be some interoperability between EHRs but I honestly haven’t been following it for quite some time.
As for availability of medical history to researchers, I highly doubt this will happen.
Big tech has ruined the trust between people and technology. People gave up their data to G, MS, FB, and others for many years.
We have yet to see any benefit for the common man or woman; the data is only used against us. Used to divide us (echo chambers). Used to manipulate us (buy THIS, hate that, anti WoKe). Used to control uneducated and vulnerable populations. Used to manipulate elections. Used to enrich the billionaire class.
If I had any funding to work freely in these subjects, I would instead focus on the more fundamental questions of computationally mapping and reversing cellular senescence, starting with something tiny and trivial (but perhaps not tiny or trivial enough) like a rotifer. My focus wouldn't be the biologists' "we want to understand this rotifer" or "we want to understand senescence", but more "can we create an exact computational framework to map senescence, a framework which can be extended and applied to other organisms?"
Sadly, funding for science is a lost cause, because even where/when it is available, it comes with all sorts of political and ideological chains.
Researching and curing AD is not barking up the wrong tree. There is a horrible deadly monster in that tree that needs defeating. I hope people also get scientific funding for other age-related issues.
[0] https://www.cell.com/cell/fulltext/S0092-8674(25)00397-6
Sure sounds like it.
...I ask because bio/chem visualization and simulation was a solved problem back in the 1980s (...back when bad TV shows used renders of spinning organic-chemistry hexagons on the protagonist's computer as a visual metaphor for doing science!).
"AI" in this case was used to generate a 3D model of a protein. Literally, something you can grab from Wikipedia — https://en.m.wikipedia.org/wiki/Phosphoglycerate_dehydrogena...
The underlying work performed by the researchers is much more interesting — https://linkinghub.elsevier.com/retrieve/pii/S00928674250039...
They identified a possible upstream pathway that could help treat disease and build therapeutic treatments for Alzheimer’s.
I don’t know about you all, but I’m tired of the AI mania. At least the author didn’t put "blockchain" in the article.
Because there's AI as in "letting ChatGPT do the hard bits of programming or writing for me", for which it is woefully unsuited, and there's AI as in using machine learning as a statistical approach, which it fundamentally is. It's something you can pour data into and let the machine find how the data clump together, so you can investigate potential causative relationships the Mark I eyeball might have missed.
I'm excited for the possibilities these uses of AI might bring.
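As a toy example of that second kind of AI - pour data in and see how it clumps - here's an unsupervised clustering sketch on made-up numbers (scikit-learn and NumPy assumed; nothing here is real biomarker data):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two fake patient sub-populations that differ in a pair of measurements.
    data = np.vstack([
        rng.normal([1.0, 5.0], 0.3, size=(100, 2)),
        rng.normal([4.0, 2.0], 0.3, size=(100, 2)),
    ])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    print(np.bincount(labels))   # roughly [100 100]: the two clumps, found with no labels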
Because I find myself nodding along with optimism, having two grandfathers that died from this disease. It’d be great if something could sift through all the data and come up with a novel solution.
Then I remember that this is the same technology that eagerly tries to autocomplete every other line of my code to include two nonexistent variables and a nonexistent function.
I hope this field has some good people to sanity check this stuff.
"Then I remember that this is the same technology that failed to drive in screws for a project I was working on a week ago."
The AI that's being used in applications like this is not generative AI. It really is just "sparkling statistics" and it's tremendously useful in applications like this because it can accelerate the finding of patterns in data that form the basis of new discoveries.
https://www.youtube.com/watch?v=SXyrYMxa-VI
A paper author is indeed quoted about the use of AI. But without explaining precisely how AI was used and why it was valuable, this article is basically clickbait trash. Was AI necessary for their key result? If so, how and why? We don't know!
Everything about this screams "just say AI and we'll get more attention".
I agree the UCSD writeup is pretty misleading; the authors used protein-modeling software, which is really not very interesting, and the fact that the SOTA protein modeler uses machine learning is not at all relevant to this specific paper.
Ah yeah I skimmed and searched for “AI” so missed that. The UCSD article does not contain the term “AlphaFold” so yeah they’re definitely engagement baiting.
It’s a nice reprieve from “we’re using a chatbot as a therapist and it started telling people to kill themselves” type news.
This is a completely normal way to talk about inanimate objects
The human body is a pretty amazing construction; nature doesn't make a lot of mistakes.