alex_suzuki 7 hours ago
Immediate nostalgia activated. I ran this on a Pentium machine (I think) at home, still living with my parents. Sometimes I yearn for the optimism and relative naïveté of those times.
estimator7292 6 hours ago
Pentium III in a crusty Compaq with a 5.25" Bigfoot hard drive.
Those were the days
sjm-lbm 4 hours ago
I still have both the screensaver and the moment I realized that disabling the screensaver let processing run meaningfully faster burned into my mind.
j79 an hour ago
Same. Although with the curse of hindsight, I painfully recall choosing to run SETI@home instead of "mining" some weird digital currency called Bitcoin back in 2010. So, painful.
saganus 2 hours ago
I was running my K6-2 and I was _convinced_ it was superior to equivalent Intel CPUs.
Spent hours watching the graph hoping to get triplets and some kind of confirmation that I just found ET.
Miss those days so much.
poorman 2 hours ago
I was just thinking about this project the other day. Seems we have a whole lot of unused compute (and now GPU). I wish someone would create a meaningful project like this to distribute AI training or something. Imagine underfunded AI researchers being able to distribute work to idle machines like SETI@home did.
wiz21c 2 hours ago
Asked Gemini about that: "are there efforts to train big LLM in a distributed fashion à la seti@home ? "
The answer was really interesting:
- https://github.com/PrimeIntellect-ai/prime
- https://www.together.ai/
bpoyner 9 hours ago
This paper describes the front end of SETI@home and provides parameters for the primary data source, the Arecibo Observatory:
>Most of this data was recorded commensally at the Arecibo observatory over a 22 yr period
Interesting, as Arecibo collapsed in December 2020. It sounds like they still have a lot of data to churn through.
PokemonNoGo 7 hours ago
>Most radio SETI projects process data in near real-time using special purpose analyzers at the telescope. SETI@home takes a different approach. It records digital time-domain (also called baseband) data, and distributes it over the internet to large numbers of computers that process the data, using both CPUs and GPUs.
Definitely something going on here I'm not following.
>SETI@home is in hibernation. We are no longer distributing tasks. [0]
Is this paper really old or something? I would love to turn on my clients again :D
drb493 5 hours ago
The distributed compute part of the project has turned off but data analysis continues.
I know what you mean; these types of projects inspired me to contribute as a young citizen scientist.
A different domain, but https://foldingathome.org/ is still running. Using distributed compute to study protein folding.
washedDeveloper 4 hours ago
If you are looking for a good list of these types of projects: https://boinc.berkeley.edu/projects.php
alexpotato 4 hours ago
Wasn't this largely solved by DeepMind's AlphaFold?
drb493 4 hours ago
I'd discourage claiming any biological process is "solved."
But to your point: no. AlphaFold is an amazing machine learning approach to predicting protein structure, but Folding@home is still immensely useful for simulating how proteins fold over time. They are/will be complementary methods.
elicash 6 hours ago
They went into hibernation, in terms of accepting new inputs, several years ago. They had more data than they could handle and switched to analyzing existing data and preparing final reports.
elicash 8 hours ago
With the final analysis of this project complete, I do wonder if there's a way to bring it back with distributed agents doing the part that was so time-intensive for researchers that they had to kill it.
torcete an hour ago
We used to use computing power to search for ET signals, now we mine bitcoins.
GorbachevyChase an hour ago
That’s probably because that’s what the aliens want us to be doing. They can’t have just everybody snooping around their harvesting operations.
Kalpaka 4 hours ago
Something about SETI@home that doesn't get said enough: it didn't just do science, it created a category.
Before it, "distributed computing" meant institutional grids, cluster access, gated systems. SETI@home proved that aggregating idle cycles from millions of ordinary machines was a legitimate scientific method. That proof changed what was possible.
Folding@home came next. BOINC was built to formalize the template. Distributed citizen science became a recognized mode of doing research. None of that path was obvious before SETI@home walked it first.
What's strange is that cheap cloud compute kind of ended this era not by failing but by succeeding. Why donate your CPU when AWS is a credit card away? The economics shifted. But something got lost too — the screensaver running while you slept, the knowledge that your specific machine was doing something real in the world. That personal connection to a distributed effort hasn't really been replicated.
elicash's question is the right one. Could distributed agents revive the model? Maybe. But I suspect the hard part isn't the architecture — it's recreating the feeling that your contribution matters when it's one of ten million.