hodgehog11 2 days ago
It's good to see that at least one tech company is interested in using machine learning for scientific research. You know, research that plausibly benefits humanity rather than providing a tool for students to cheat with.
Several colleagues of mine have had to switch out of scientific machine learning as a discipline because the funding just isn't there anymore. All the money is in generic LLM research and generating slightly better pictures.
Jach 2 days ago
Did you notice this? https://openai.com/index/accelerating-life-sciences-research...
hodgehog11 2 days ago
No, thank you for sharing. At first glance, I would argue this is still along the lines of what DeepMind has already done, and unlike DeepMind, they don't seem to care to engage with the communities that have been involved with this for a long time. But still, this suggests scientific machine learning is not abandoned by OpenAI, and maybe some others that I have missed. Hopefully there is a change in the winds over the next few years!
dist-epoch 2 days ago
[flagged]
bubblyworld 2 days ago
People are allowed to change their minds, or even (god forbid) have contradictory and/or nuanced beliefs.
conartist6 2 days ago
The headline is borderline offensive in what it wink-wink suggests. The content is just normal boring stuff engineers deal with -- vibration damping.
macleginn 2 days ago
But the solution -- using reinforcement learning -- is arguably novel and AI-related. (And also less deterministic?)
conartist6 2 days ago
Yeah, totally fine. Once I read past the headline I was fine with all of it. It's just egregious clickbait that is actively misleading until you click through.
jebarker 2 days ago
I don’t see what’s misleading. Is it that people read “perceive” to mean “understand”? The headline seems like a reasonable simplification of the actual work to me.
conartist6 2 days ago
With none of the context, as is the case before you click a headline, it sounds like they're claiming ChatGPT is a philosopher.
jebarker 2 days ago
Thanks for explaining. I think it is the interpretation of “perceive” that does that. When I read “perceive” I think about sensors, and it's prior to any interpretation, but I guess that's not how everyone (most people?) reads it.
therealpygon 2 days ago
None of those things quite fits the definition of perceiving, or “becoming aware of” something, unless you stretch the definition of “awareness”. However, by technical definition, if AI helps us see deeper into space, then the title is accurate. I have to agree that it is a bit ambiguous as a title, but as they say, being technically right is still right.
yosito 2 days ago
I hate that this kind of machine learning applied to scientific research and consumer-focused LLMs are both called "AI", that neither is "intelligent", and that consumers don't know the difference.
molticrystal 2 days ago
Well, the term Artificial Intelligence came from a 1955 proposal for a conference entitled "The Dartmouth Summer Research Project on Artificial Intelligence".
To quote their purpose:
>The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
While you may argue it is not intelligent, it is certainly AI, which covers anything in the last 70 years utilizing a machine that could be considered an incremental step towards simulating intelligence and learning.
card_zero 2 days ago
... by people working on AI, and suckers.
This is "it's just an engineering problem, we just have to follow the roadmap", except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
ben_w 2 days ago
> This is "it's just an engineering problem, we just have to follow the roadmap",
No, this is "it's a science problem". All this:
> except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
is what makes it science rather than engineering.
card_zero 2 days ago
I mean, thinking about it a lot and trying stuff out is good, but you can't claim anything you tried was a step toward the eventual vital insight, except retrospectively. It's not incremental like a progress bar, it's more like a spinner. Maybe something meaningful is going on, maybe not.
auggierose 2 days ago
I'd say if you are doing proper science, all your steps are towards the eventual vital insight. Many of the steps may turn out to lead down the wrong lane, but you cannot know that in advance. A simplified way to view this: If you are searching for a certain node in a graph, visiting wrong nodes in the process cannot be avoided, and of course is part of finding the right node.
From the outside though, it is tough to decide if somebody is doing proper science. Maybe they are just doing nonsense. Following a hunch or an intuition may look like nonsense from the outside, though.
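To make the analogy concrete (a toy sketch of my own, not anything specific to the thread -- all names here are illustrative), a plain breadth-first search necessarily visits "wrong" nodes on the way to the target, and none of those visits can be skipped in advance:

```python
from collections import deque

def bfs_path_to(graph, start, target):
    """Breadth-first search over an adjacency dict.

    Returns (visited_order, found). Every dead-end node visited
    along the way is still part of the search -- you can't know
    it's a dead end before you've looked at it.
    """
    visited = []
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        visited.append(node)
        if node == target:
            return visited, True
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return visited, False

# Toy graph: the "insight" is only reached after visiting
# several nodes that turn out to lead nowhere.
graph = {
    "hunch": ["dead_end_1", "dead_end_2", "promising"],
    "promising": ["insight"],
}
order, found = bfs_path_to(graph, "hunch", "insight")
# order == ["hunch", "dead_end_1", "dead_end_2", "promising", "insight"]
```

The point of the sketch: the dead ends appear in `order` not because the search was bad, but because visiting them is how the search works.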
card_zero 2 days ago
But (connecting this back to the start of the thread) then you can say things like "controlled nuclear fusion can in principle be achieved, therefore my experiments in cold fusion in a test tube are an incremental step toward it, therefore I am actually doing fusion, gib money".
auggierose 2 days ago
First, nobody is obliged to give you money. You'll need to convince them first.
Second, I'm not sure what you are saying exactly. Do you think "experiments in cold fusion in a test tube" are a step forward for science? Do you think a serious scientist would believe that?
As I said, playing science, and doing proper science, are two entirely different things, but hard to distinguish from the outside.
card_zero 19 hours ago
Back in 1989 they (Martin Fleischmann and Stanley Pons) had a hunch, spent a lot of their own money, and did some experiments. Others couldn't replicate it. That much is a step forward by "visiting wrong nodes" as you put it, trying out a dead end.
Leaving money out of it, my point is that they weren't doing fusion, they were doing fusion research. Their device was for fusion, but it was not a working fusion device. Similarly, the software of AI researchers is not working AI software, and they are not doing AI, apart from the semantic shift where we call it AI now anyway and coined the term AGI for the former meaning.
It's not correct to say that an experiment, with the intent of finding out how to do a thing, is equal to the goal. It's a step.
Calling it "incremental" is misleading since all steps are incremental, and assuming you're doggedly determined and exit blind alleys and circles, you will eventually arrive, if the destination exists. But "incremental" suggests you know how far there is to go, or at least can put a bound on it, and know in some sense which way. Like the whole thing is planned.
So saying that AI "is anything in the last 70 years utilizing a machine that could be considered an incremental step towards [AI]" is misleading, in both those ways. The process is not the goal, and the goal is not being approached at a known rate.
dumpsterdiver 2 days ago
Now say what you just said in a really excited TV announcer voice, as if you’re really excited to find out, and boom - science.
merelysounds 2 days ago
If it helps, it is not a new thing - we’ve experienced that with e.g. “cloud” before (and “ajax”, “blockchain”, “metaverse”, etc). Eventually buzzwords fall out of fashion; although they do get replaced by new ones.
magicmicah85 2 days ago
AI is just a broader term. It's like saying "we used computers". Consumers also don't need to know the difference, but a compsci major should.