Chris_Newton12 hours ago
Dependabot has some value IME, but all naïve tools that only check software and version numbers against a vulnerability database tend to be noisy if they don’t then do something else to determine whether your code is actually exposed to a matching vulnerability.
One security checking tool that has genuinely impressed me recently is CodeQL. If you’re using GitHub, you can run this as part of GitHub Advanced Security.
Unlike those naïve tools, CodeQL seems to perform a real tracing analysis through the code, so its report doesn’t just say you have user-provided data being used dangerously, it shows you a complete, step-by-step path through the code that connects the input to the dangerous usage. This provides useful, actionable information to assess and fix real vulnerabilities, and it is inherently resistant to false positives.
Presumably there is still a possibility of false negatives with this approach, particularly with more dynamic languages like Python where you could surely write code that is obfuscated enough to avoid detection by the tracing analysis. However, most of us don’t intentionally do that, and it’s still useful to find the rest of the issues even if the results aren’t perfect and 100% complete.
notepad0x90 an hour ago
Agreed, CodeQL has been amazing. But it's important not to replace type checkers and linters with it: it complements them, it doesn't replace them.
Certain languages don't have enough "rules" (forgot the term) either. This is the only open/free SAST I know of, if there are others I'd be interested as well.
My hope+dream is for Linux distros to require checks like this to pass for anything they admit to their repo.
madarcho8 hours ago
CodeQL was a good help on some projects, but more recently, our team has been increasingly frustrated by the thing to the point of turning it off.
The latest drop in the bucket was a comment adding a useless intermediate variable, with the justification being “if you do this, you’ll avoid CodeQL flagging you for the problem”.
Sounds like slight overfitting to the data!
missingdays3 hours ago
So, CodeQL found a vulnerability in your code, you avoided the warning by adding an intermediate variable (but ignored the vulnerability), and you are frustrated with CodeQL, not the person who added this variable?
mwcz3 hours ago
If I read it correctly, the comment suggesting the intermediate variable was from CodeQL itself.
maltalex5 hours ago
> Dependabot has some value IME, but all naïve tools that only check software and version numbers against a vulnerability database tend to be noisy if they don’t then do something else to determine whether your code is actually exposed to a matching vulnerability.
For non-SaaS products it doesn’t matter. Your customer’s security teams have their own scanners. If you ship them vulnerable binaries, they’ll complain even if the vulnerable code is never used or isn’t exploitable in your product.
Chris_Newton4 hours ago
This is true and customers do a lot of unfortunate things in the name of security theatre. Sometimes you have to play the cards you’ve been dealt and roll with it. However, educating them about why they’re wasting significant amounts of money paying you to deal with non-problems does sometimes work as a mutually beneficial alternative.
bluedino4 hours ago
We had a Python "vulnerability" that only existed on 32-bit platforms, which we don't use in our environment, but do you think we could get the cyber team to understand that?
Nope.
david_allison29 minutes ago
CodeQL has been disappointing with Kotlin: support lagged behind the official releases by about two months, blocking our update to Kotlin 2.3.0.
maweki9 hours ago
> it is inherently resistant to false positives
By Rice's Theorem, I somehow doubt that.
summarity9 hours ago
No engine can be 100% perfect, of course, but the original comment is broadly accurate. CodeQL builds a full semantic database from source code, including types and dataflow, then runs queries against it. QL is fundamentally a logic programming language that is only concerned with the satisfiability of the given constraints.
If dataflow is not provably connected from source to sink, an alert is impossible. If a sanitization step interrupts the flow of potentially tainted data, the alert is similarly discarded.
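That source-to-sink reachability idea can be illustrated with a toy model (this is only a sketch of the concept, nothing like CodeQL's actual engine; all names here are made up):

```python
from collections import deque

def tainted_sinks(edges, sources, sinks, sanitizers):
    """Return sinks reachable from a taint source along dataflow edges
    that never pass through a sanitizer node (toy model)."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    seen = set()
    queue = deque(s for s in sources if s not in sanitizers)
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, []):
            if nxt not in sanitizers:  # sanitization interrupts the flow
                queue.append(nxt)
    return sorted(seen & set(sinks))
```

With edges `request.args → q → query → db.execute`, the sink is reported only when no sanitizer sits on the path, which mirrors the "no provable flow, no alert" behaviour described above.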
The end-to-end precision of the detection depends on the queries executed, the models of the libraries used in the code (to e.g., recognize the correct sanitizers), and other parameters. All of this is customizable by users.
All that can be overwhelming though, so we aim to provide sane defaults. On GitHub, you can choose between a "Default" and "Extended" suite. Those are tuned for different levels of potential FN/FP based on the precision of the query and severity of the alert.
Severities are calculated based on the weaknesses the query covers, and the real CVEs these have caused in previously disclosed vulnerabilities.
QL-language-focused resources for CodeQL: https://codeql.github.com/
Chris_Newton9 hours ago
Sorry, I don’t understand the point you’re making. If CodeQL reports that you have an XSS vulnerability in your code, and its report includes the complete and specific code path that creates that vulnerability, how is Rice’s theorem applicable here? We’re not talking about decidability of some semantic property in the general case; we’re talking about a specific claim about specific code that is demonstrably true.
everforward6 hours ago
Rice’s theorem applies to any non-trivial semantic property.
Looking at the docs, I’m not really sure CodeQL is semantic in the same sense as Rice’s theorem. It looks syntactic more than semantic.
E.g., beating Rice’s theorem would require it to detect that an application isn’t vulnerable if it contains the vulnerability but only in paths that are unreachable. Like
    if request.params.limit > 1000:
        raise Error()
    # 1000 lines of code
    if request.params.limit > 1000:
        call_vulnerable_code()
I’m not at a PC right now, but I’d be curious whether CodeQL thinks that’s vulnerable. It’s probably demonstrably true that there is syntactically a path to the vulnerability; I’m a little dubious that it’s demonstrably true that the code path is actually reachable without executing the code.
SkiFire13 6 hours ago
> We’re not talking about decidability of some semantic property in the general case; we’re talking about a specific claim about specific code
Is CodeQL special cased for your code? I very much doubt that. Then it must work in the general case. At that point decidability is impossible and at best either false positives or false negatives can be guaranteed to be absent, but not both (possibly neither of them!)
I don't doubt CodeQL's claims can be demonstrably true; that's still coherent with Rice's theorem. However, it does mean you'll have false negatives, that is, cases where CodeQL reports no provable claim while your code is vulnerable to some issue.
Chris_Newton4 hours ago
OK, but all I said before was that CodeQL’s approach where it supplies a specific example to support a specific problem report is inherently resistant to false positives.
Clearly it is still possible to generate a false positive if, for example, CodeQL’s algorithm thinks it has found a path through the code where unsanitised user data can be used dangerously, but in fact there was a sanitisation step along the way that it didn’t recognise. This is the kind of situation where the theoretical result about not being able to determine whether a semantic property holds in all cases is felt in practical terms.
It still seems much less likely that an algorithm that needs to produce a specific demonstration of the problem it claims to have found will result in a false positive than the kind of naïve algorithms we were discussing before that are based on a generic look-up table of software+version=vulnerability without any attempt to determine whether there is actually a path to exploit that vulnerability in the real code.
UncleMeat3 hours ago
Rice's Thm just says that you can't have a sound and complete static analysis. You can happily have one or the other.
silverwind10 hours ago
CodeQL seems to raise too many false-positives in my experience. And it seems there is no easy way to run it locally, so it's a vendor lock-in situation.
summarity9 hours ago
Heyo, I'm the Product Director for detection & remediation engines, including CodeQL.
I would love to hear what kind of local experience you're looking for and where CodeQL isn't working well today.
As a general overview:
The CodeQL CLI is developed as an open-source project and can run CodeQL basically anywhere. The engine is free to use for all open-source projects, and free for all security researchers.
The CLI is available as release downloads, in homebrew, and as part of many deployment frameworks: https://github.com/advanced-security/awesome-codeql?tab=read...
Results are stored in standard formats and can be viewed and processed by any SARIF-compatible tool. We provide tools to run CodeQL against thousands of open-source repos for security research.
The repo linked above points to dozens of other useful projects (both from GitHub and the community around CodeQL).
godisdad4 hours ago
The vagaries of the dual licensing discourages a lot of teams working on commercial projects from kicking the tires on CodeQL and generally hinders adoption for private projects as well: are there any plans to change the licensing in the future?
mstade4 hours ago
Nice, I for one didn't know about this. Thanks a bunch for chiming in!
Chris_Newton9 hours ago
CodeQL seems to raise too many false-positives in my experience.
I’d be interested in what kinds of false positives you’ve seen it produce. The functionality in CodeQL that I have found useful tends to accompany each reported vulnerability with a specific code path that demonstrates how the vulnerability arises. While we might still decide there is no risk in practice for other reasons, I don’t recall ever seeing it make a claim like this that was incorrect from a technical perspective. Maybe some of the other types of checks it performs are more susceptible to false positives and I just happen not to have run into those so much in the projects I’ve worked on.
ploxiln4 hours ago
The previous company I was working at (6 months ago) had a bunch of microservices, most in Python using FastAPI and pydantic. At one point the security team turned on CodeQL for a bunch of them, and we just got a bunch of false positives for not validating a UUID URL path param to a request handler. In fact the parameter was typed in the handler function signature, and FastAPI does validate that type. But in this strange case, CodeQL knew that these were external inputs, but didn't know that FastAPI would validate that path param type, so it suggested adding redundant type-check and bail-out code, in hundreds of places.
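For context, the safety described here comes from FastAPI validating the annotated parameter type before the handler body ever runs. A stdlib-only sketch of roughly that check (hypothetical function name, not FastAPI's actual code):

```python
from uuid import UUID

def parse_uuid_param(raw: str) -> UUID:
    # Roughly what FastAPI/pydantic do for a path param annotated as UUID:
    # invalid input raises ValueError (FastAPI turns that into a 422
    # response), so the handler body never sees an unvalidated value.
    return UUID(raw)
```

Because the rejection happens in the framework rather than in the handler's own body, a tool that doesn't model the framework sees an "unvalidated" external input.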
The patterns we had established were as simple, basic, and "safe" as practical, and we advised and code-reviewed the mechanics of services/apps for the other teams, like using database connections/pools correctly, using async correctly, validating input correctly, etc (while the other teams were more focused on features and business logic). Low-level performance was not really a concern, mostly just high-level db-queries or sub-requests that were too expensive or numerous. The point is, there really wasn't much of anything for CodeQL to find, all the basic blunders were mostly prevented. So, it was pretty much all false-positives.
Of course, the experience would be far different if we were more careless or working with more tricky components/patterns. Compare to the base-rate fallacy from medicine ... if there's a 99% accurate test across a population with nothing for it to find, the "1%" false positive case will dominate.
I also want to mention a tendency for some security teams to decide that their role is to set these things up, turn them on, cover their eyes, and point the hose at the devs. Using these tools makes sense, but these security teams think it's not practical for them to look at the output and judge the quality with their own brains, first. And it's all about the numbers: 80 criticals, 2000 highs! (except they're all the same CVE and they're all not valid for the same reason)
Chris_Newton4 hours ago
Interesting, thanks. In the UUID example you mentioned, it seems the CodeQL model is missing some information about how FastAPI’s runtime validation works and so not drawing correct inferences about the types. It doesn’t seem to have a general problem with tracking request parameters coming into Python web frameworks — in fact, the first thing that really impressed me about CodeQL was how accurate its reports were with some quite old Django code — but there is a lot more emphasis on type annotations and validating input against those types at runtime in FastAPI.
I completely agree about the problem of someone deciding to turn these kinds of scanning tools on and then expecting they’ll Just Work. I do think the better tools can provide a lot of value, but they still involve trade-offs and no tool will get everything 100% right, so there will always be a need to review their output and make intelligent decisions about how to use it. Scanning tools that don’t provide a way to persistently mark a certain result as incorrect or to collect multiple instances of the same issue together tend to be particularly painful to work with.
varispeed11 hours ago
Bumping version of dependencies doesn't guarantee any improved safety as new versions can introduce security issues (otherwise we wouldn't have a need of patching old versions that used to be new).
Chris_Newton8 hours ago
If you replace a dependency that has a known vulnerability with a different dependency that does not, surely that is objectively an improvement in at least that specific respect? Of course we can’t guarantee that it didn’t introduce some other problem as well, but not fixing known problems because of hypothetical unknown problems that might or might not exist doesn’t seem like a great strategy.
gopher_space an hour ago
I think he's referring to this part of the article:
> Dependencies should be updated according to your development cycle, not the cycle of each of your dependencies. For example you might want to update dependencies all at once when you begin a release development cycle, as opposed to when each dependency completes theirs.
and is arguing in favor of targeted updates.
It might surprise the younger crowd to see the number of Windows Updates you wouldn't have installed on a production machine, back when you made choices at that level. From this perspective Tesla's OTA firmware update scheme seems wildly irresponsible for the car owner.
eru9 hours ago
Maybe. But at least everyone being on the same (new) version makes things simpler, compared to everyone being on different random versions, of what ever used to be current when they were written.
andrewaylett31 minutes ago
I approve of Renovate's distinct recommendations for libraries vs applications.
For a library, you really want the widest range of "allowed" dependencies, but for the library's test suite you want to pin specific versions. I wrote a tool[1] that helps me make sure (for the npm ecosystem) my dependency specifications aren't over-wide.
For an application, you just want pinned specific dependencies. Renovate has a nice feature wherein it'll maintain transitive dependencies, so you can avoid the trap of only upgrading when forced to by more direct dependencies.
The net result is that most version bumps for my library code only affect the test environment, so I'm happy allowing them through if the tests pass. For application code, too, my personal projects will merge version bumps and redeploy automatically -- I only need to review if something breaks. This matches the implicit behaviour I see from most teams anyway, who rely on "manual review" but only actually succeed in adding toil.
My experience is that Renovate's lock-file maintenance makes updates a whole load safer than the common pattern of having ancient versions of most transitive dependencies, then upgrading a thread of packages depended on by a newer version of a single dependency.
nfm a day ago
The number of ReDoS vulnerabilities we see in Dependabot alerts for NPM packages we’re only using in client code is absurd. I’d love a fix for this that was aware of whether the package is running on our backend or not. Client side ReDoS is not relevant to us at all.
staticassertion21 hours ago
TBH I think that DoS needs to stop being considered a vulnerability. It's an availability concern, and availability, despite being part of CIA, is really more of a principle for security than the domain of security. In practice, availability is far better categorized as an operational or engineering concern than a security concern, and it does far, far more harm to categorize DoS as a security concern than it does to help.
It's just a silly historical artifact that we treat DoS as special, imo.
jpollock21 hours ago
The severity of the DoS depends on the system being attacked, and how it is configured to behave on failure.
If the system is configured to "fail open", and it's something validating access (say anti-fraud), then the DoS becomes a fraud hole and profitable to exploit. Once discovered, this runs away _really_ quickly.
Treating DoS as affecting availability converts the issue into a "do I want to spend $X from a shakedown, or $Y to avoid being shaken down in the first place?"
Then, "what happens when people find out I pay out on shakedowns?"
staticassertion21 hours ago
If the system "fails open" then it's not a DoS, it's a privilege escalation. What you're describing here is just a matter of threat modeling, which is up to you to perform and not a matter for CVEs. CVEs are local properties, and DoS does not deserve to be a local property that we issue CVEs for.
otabdeveloper4 14 hours ago
You're making too much sense for a computer security specialist.
michaelt21 hours ago
> If the system is configured to "fail open", and it's something validating access (say anti-fraud),
The problem here isn't the DoS, it's the fail open design.
jpollock20 hours ago
If the majority of your customers are good, failing closed will cost more than the fraud during the anti-fraud system's downtime.
prmoustache6 hours ago
If that is the mindset in your company, why even bother looking for vulnerabilities?
everforward5 hours ago
You are really running with scissors there. If anyone with less scrupulous morals notices, you’re an outage away from being in deep, deep shit.
The best case is having your credit card processing fees like quadruple, and the worst case is being in a regulated industry and having to explain to regulators why you knowingly allowed a ton of transactions with 0 due diligence.
lazyasciiart15 hours ago
Until any bad customer learns about the fail-open.
eru9 hours ago
If bad actors learn about the fail-close, they can conceivably cause you more harm.
gopher_space42 minutes ago
This is a losing money vs. losing freedom situation.
eru9 hours ago
Also, in e.g. C code, many exploits start out as what would only be a DoS, but can later be turned into a more dangerous attack.
staticassertion5 hours ago
If you're submitting a CVE for a primitive that seems likely to be useful for further exploitation, mark it as such. That's not the case for ReDoS or the vast majority of DoS; it's already largely the case that you'd mark something as "privesc" or "rce" if you believe it provides that capability, without necessarily having a full, reliable exploit.
CVEs are at the discretion of the reporter.
vasco14 hours ago
> Treating DoS as affecting availability converts the issue into a "do I want to spend $X from a shakedown, or $Y to avoid being shaken down in the first place?"
> Then, "what happens when people find out I pay out on shakedowns?"
What do you mean? You pay to someone else than who did the DoS. You pay your way out of a DoS by throwing more resources at the problem, both in raw capacity and in network blocking capabilities. So how is that incentivising the attacker? Or did you mean some literal blackmailing??
jpollock12 hours ago
Literal blackmailing, same as ransomware.
bawolff20 hours ago
The real problem is that we treat vulnerabilities as binary without nuance. Whether a security vulnerability is an issue depends on context. This comes up a lot for DoS (and especially ReDoS) as it is comparatively rare for it to be real, but it can happen for any vulnerability type.
jayanmn13 hours ago
Our top management has zero interest in context. There is a chart, and it must not have red items.
The security team cannot explain attack surface. In the end it is binary: fix it or take the blame.
staticassertion20 hours ago
I don't really agree. Maybe I do, but I probably have mixed feelings about that at least.
DoS is distinct because it's only considered a "security" issue due to arbitrary conversations that happened decades ago. There's simply not a good justification today for it. If you care about DoS, you care about almost every bug, and this is something for your team to consider for availability.
That is distinct from, say, remote code execution, which not only encompasses DoS but is radically more powerful. I think it's entirely reasonable to say "RCE is worth calling out as a particularly powerful capability".
I suppose I would put it this way. An API has various guarantees. Some of those guarantees are "won't crash" or "terminates eventually", but that's actually insanely uncommon and not standard, therefore DoS is sort of pointless. Some of those guarantees are "won't let unauthorized users log in" or "won't give arbitrary code execution", which are guarantees we kind of just want to take for granted because they're so insanely important to the vast majority of users.
I kinda reject the framing that it's impossible to categorize security vulnerabilities broadly without extremely specific threat models, I just think that that's the case for DoS.
There are other issues like "is it real", i.e. "is this even exploitable?", where there's perhaps some nuance, and there are issues like "this isn't reachable from my code", etc. But I do think DoS doesn't fall into the nuanced category; it's just flatly an outdated concept.
bawolff18 hours ago
I am kind of sympathetic to that view. In practice I do find most DoS vulns to be noise, or at least fundamentally different from other security bugs, because worst case you get attacked, have some downtime, and fix it. You don't have to worry about persistence or data leaks.
But at the same time, I don't know. Pre-Cloudflare bringing cheap DDoS mitigation to the masses, I suspect most website operators would have preferred to be subject to an XSS attack over a DoS. At least XSS has a viable fix path (of course, volumetric DoS is a different beast than CVE-type DoS vulns).
bigfatkitten15 hours ago
There are good reasons for that history which are still relevant today.
We have decades of history of memory corruption bugs that were initially thought to only result in a DoS, that with a little bit of work on the part of exploit developers have turned into reliable RCE.
staticassertion8 hours ago
I don't believe that's the history here but I could be wrong. The history is that CIA encompasses availability, which it shouldn't.
Regardless, I don't think it matters. If you truly believe your DoS may be a likely privesc etc., label it as such. The system accounts for this. The vast majority of DoS issues are blatantly not primitives for other exploits.
SAI_Peregrinus2 hours ago
If DoS is a vulnerability, then bad UX is also a vulnerability because it's functionally a DoS if it's bad enough. If users can't use the software it doesn't matter whether they can't because of an attacker or because of the software's inherent unusability.
Lichtso20 hours ago
> I Think that DoS needs to stop being considered a vulnerability
Strongly disagree. While it might not matter much in some / even many domains, it absolutely can be mission critical. Examples are: Guidance and control systems in vehicles and airplanes, industrial processes which need to run uninterrupted, critical infrastructure and medicine / health care.
technion17 hours ago
These ReDoS vulnerabilities always come down to "requires a user input of unbounded length to be passed to a vulnerable regex in JavaScript". If someone is building a hard real-time airplane guidance system, they are already not doing this.
I can produce a web server that prints hello world, and if you send it enough traffic it will crash. I can put user input into a regex and the response time might go up by 1ms, and no one will say it's suddenly a valid CVE.
Then someone will demonstrate that with a 1MB input string it takes 4ms to respond and claim they've earned a CVE for it. I disagree. If you simply use webpack you've probably seen a dozen of these where the vulnerable input was inside the webpack.config.js file. The whole category should go in the bin.
bandrami17 hours ago
> If someone is building a hard real time air plane guidance system they are already not doing this.
But if we no longer classed DOSes as vulnerabilities they might
bregma8 hours ago
These are functional safety problems, not security vulnerabilities.
For a product that requires functional safety, CVEs are almost entirely a marketing tool and irrelevant to the technology. Go ahead and classify them as CVEs; it means the sales people can schmooze with their customers' purchasing-department folks more, but it's not going to affect making your airplane fly or your car drive or your cancer treatment treat any more safely.
staticassertion19 hours ago
I think this is just sort of the wrong framing. Yes, a plane having a DoS is a critical failure. But it's critical at the level where you're considering broader scopes than just the impact of a local bug. I don't think this framing makes any sense for the CVE system. If you're building a plane, who cares about DoS being a CVE? You're way past CVEs. When you're in "DoS is a security/ major boundary" then you're already at the point where CVSS etc are totally irrelevant.
CVEs are helpful for describing the local property of a vulnerability. DOS just isn't interesting in that regard because it's only a security property if you have a very specific threat model, and your threat model isn't that localized (because it's your threat model). That's totally different from RCE, which is virtually always a security property regardless of threat model (unless your system is, say, "aws lambda" where that's the whole point). It's just a total reversal.
1718627440 10 hours ago
If availability is a security concern, then yes, DoS is a security concern, but only insofar as all other bugs that limit availability are too. It is not a security concern per se, regardless of whether availability is a security concern. We don't treat every bug as a security issue.
Well, the Linux Kernel project actually does.
staticassertion2 hours ago
The Linux kernel does the opposite; they do not believe in security vulnerabilities. That's why if you mention "security" in a patch, Linus will reject it.
clickety_clack17 hours ago
I just hate being flagged for rubbish in Vanta that is going to cause us the most minor possible issue with our clients because there’s a slight risk they might not be able to access the site for a couple of hours.
akerl_19 hours ago
Maybe we should start issuing CVEs for all bugs that might negatively impact the security of a system.
ranger207 18 hours ago
The Linux kernel approach
[deleted] 18 hours ago
kortilla15 hours ago
If I can cause a server to not serve requests to anyone else in the world by sending a well crafted set of bytes, that’s absolutely a vulnerability because it can completely disable critical systems.
If availability isn’t part of CIA then a literal brick fulfills the requirements of security and the entire practice of secure systems is pointless.
junon21 hours ago
I maintain `debug` and the number of nonsense ReDoS vulnerability reports I get (including some with CVEs filed with high CVSS scores, without ever disclosing to me) has made me want to completely pull back from the JS world.
Twirrim18 hours ago
I've been fighting with an AI code review tool about similar issues.
That, and it can't understand that a tool that runs as the user on their laptop really doesn't need to sanitise the inputs when it's generating a command. If the user wanted to execute the command, they could, without having to obfuscate it sufficiently to get through the tool. Nope, gotta waste everyone's time running sanitisation methods. Or just ignore the stupid code review tool.
DecoySalamander10 hours ago
There is a plausible scenario in which a user finds some malicious example of CLI params for running your command and pastes it into the terminal. You don't have to handle this scenario, but it would be nice to.
adverbly21 hours ago
Seriously!
We also suffer from this. Although in some cases it's due to a Dev dependency. It's crazy how much noise it adds specifically from ReDoS...
monkpit16 hours ago
ReDoS cves in your dev dependencies like playwright that could literally never be exploited, so annoying.
robszumski21 hours ago
Totally hear you on the noise…but we should want to auto-merge vs ignore, no? Given the right tooling of course.
UqWBcuFx6NV4r20 hours ago
We could just skip some steps and I could send you a zip file of malware for you to install on your infra directly if you’d like.
[deleted] 17 hours ago
dotancohen20 hours ago
No
silverwind12 hours ago
ReDoS is a bug in the regex engine. Still, V8 etc. seem to refuse to provide a ReDoS-safe regex engine by default.
ZiiS11 hours ago
Is the possibility to write an infinite loop in your language of choice a bug?
talkin9 hours ago
Most regex usage actually doesn't require near-infinite backtracking, so limiting it unless explicitly opted in wouldn't be that weird.
candiddevmike21 hours ago
Using something like npm-better-audit in your linting/CI allows you to exclude devDependencies, which cuts down a ton of noise for us. IDGAF about vite server vulnerabilities.
ImJasonH a day ago
Govulncheck is one of the Go ecosystem's best features, and that's saying something!
I made a GitHub action that alerts if a PR adds a vulnerable call, which I think pairs nicely with the advice to only actually fix vulnerable calls.
https://github.com/imjasonh/govulncheck-action
You can also just run the stock tool in your GHA, but I liked being able to get annotations and comments in the PR.
Incidentally, the repo has dependabot enabled with auto-merge for those PRs, which is IMO the best you can do for JS codebases.
silverwind12 hours ago
Govulncheck is good, but not without false positives. Sometimes it raises "unfixable" vulnerabilities, and there's still no way to exclude vulnerabilities by CVE number.
ImJasonH11 hours ago
I haven't experienced that (that I know of), do you have an example handy?
apitman21 hours ago
I find Dependabot very useful. It drives me insane and reminds me of the importance of keeping dependencies to an absolute minimum.
mechsy11 hours ago
Absolutely! This is oftentimes my first easy task in the morning to kick things off. For many teams the temptation to let dependencies 'rot' is real; however, I have found that a reliable way to keep things up to date is enabling Dependabot and merging relentlessly, releasing often, etc.
If your test suite is up to the task you’ll find defects in new updates every now and then, but for me this has even led to some open source contributions, engaging with our dependencies’ maintainers and so on. So I think overall it promotes good practices even though it can be a bit annoying at times.
keyle17 hours ago
I agree, I don't have a ton of projects out there though.
notepad0x90 an hour ago
I really think the developer community needs to learn the age-old skill of ignoring things. Don't treat things like Dependabot, PRs, stars, issues, etc. as a metric or quantifier of how good a job you're doing with your code. Forget that social-drama nonsense.
I think the bigger problem is that Github is being treated as a quasi-social-media, and these things are being viewed as a "thumbs down" or "dislike" (and vice versa). Unless you have an SLA with someone, you don't have to meet any numbers, just do your best when you feel like it, and drive your project best way you think. Just don't be a dick to people about it, or react to these social-media metrics by lashing out against your users or supporters (not claiming that in this case!).
tracker121 hours ago
I kind of wish Dependabot was just another tab you can see when you have contributor access for a repository. The emails are annoying and I mostly filter, but I also don't want a bunch of stale PRs sitting around either... I mean it's useful, but would prefer if it was limited to just the instances where I want to work on these kinds of issues for a couple hours across a few repositories.
BHSPitMonkey21 hours ago
You can add a dependabot.yml config to regulate when Dependabot runs and how many PRs it will open at a time:
https://docs.github.com/en/code-security/reference/supply-ch...
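A minimal sketch of such a config — the `cooldown` block assumes the feature GitHub shipped in mid-2025, and the numbers are arbitrary:

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "monthly"
    open-pull-requests-limit: 3
    cooldown:
      default-days: 7
```

Setting `open-pull-requests-limit: 0` suppresses version-update PRs entirely while keeping security alerts.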
curtisf13 hours ago
Isn't it?
You can have Dependabot enabled, but turn off automatic PRs. You can then manually generate a PR for an auto-fixable issue if you want, or just do the fixes yourself and watch the issue number shrink.
operator-name20 hours ago
The refined github extension[0] has some defaults that make the default view a little more tolerable. Past that I can personally recommend Renovate, which supports far more ecosystems and customisation options (like auto merging).
indiestack21 hours ago
The govulncheck approach (tracing actual code paths to verify vulnerable functions are called) should be the default for every ecosystem, not just Go.
The fundamental problem with Dependabot is that it treats dependency management as a security problem when it's actually a maintenance problem. A vulnerability in a function you never call is not a security issue — it's noise. But Dependabot can't distinguish the two because it operates at the version level, not the call graph level.
For Python projects I've found pip-audit with the --desc flag more useful than Dependabot. It's still version-based, but at least it doesn't create PRs that break your CI at 3am. The real solution is better static analysis that understands reachability, but until that exists for every ecosystem, turning off the noisy tools and doing manual quarterly audits might actually be more secure in practice — because you'll actually read the results instead of auto-merging them.
staticassertion21 hours ago
Part of the problem is that customers will scan your code with these tools and they won't accept "we never call that function" as an answer (and maybe that's rational if they can't verify that that's true). This is where actual security starts to really diverge from the practices we've developed in the name of security.
unshavedyak21 hours ago
Would be neat if the call graph could be asserted easily.. As you could not only validate what vulnerabilities you are / aren't exposed to, but also choose to blacklist some API calls as a form of mitigation. Ensuring you don't accidentally start using something that's proven unsafe.
Gigachad12 hours ago
It’s easier to just update the package and not have to worry.
viraptor20 hours ago
https://bandit.readthedocs.io/en/latest/ can do that for python.
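The "blacklist some API calls" idea can be sketched in a few lines of standard-library Python — a toy version of what bandit-style tools do, with a made-up vulnerable symbol name; it only catches direct `module.func(...)` calls, so aliases and `getattr` slip through:

```python
import ast

def calls_banned(source: str, module: str, func: str) -> bool:
    """Return True if `source` contains a call to module.func.
    Only direct attribute calls are detected; aliased imports
    and getattr-style dispatch are missed (a real tool does more)."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            f = node.func
            if (isinstance(f, ast.Attribute) and f.attr == func
                    and isinstance(f.value, ast.Name) and f.value.id == module):
                return True
    return False

safe = "import crypto\nprint(crypto.scalar_mult(1, 2))\n"
risky = "import crypto\ncrypto.multi_scalar_mult([1], [2])\n"

print(calls_banned(safe, "crypto", "multi_scalar_mult"))   # False
print(calls_banned(risky, "crypto", "multi_scalar_mult"))  # True
```

Run against your own tree, this doubles as the assertion the parent comment wants: CI fails the moment a proven-unsafe API becomes reachable.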
chii14 hours ago
but then if you could assert the call graph (easily, or even provably correctly), then why not just cull the unused code that led to vulnerability in the first place?
mseepgood12 hours ago
With a statically compiled language it is usually culled through dead-code elimination (DCE), and with static linking you don’t ship entire libraries.
chii12 hours ago
The technology to cull code can work for dynamic languages too, even tho it does get difficult sometimes (google closure compiler[1] does dead code elimination for js, for example). It's just that most dynamic language users don't make the attempt (and you end up with this dependabot giving you thousands of false positives due to the deep dependency tree).
fweimer11 hours ago
There is the VEX justification Vulnerable_code_not_in_execute_path. But it's an application-level assertion. I don't think there's a standardized mechanism that can describe this at the component level, from which the application-level assertion could be synthesized. Standardized vulnerability metadata is per component, not per component-to-component relationship. So it's just easier to fix the vulnerability.
But I don't quite understand what Dependabot is doing for Go specifically. The vulnerability goes away without source code changes if the dependency is updated from version 1.1.0 to 1.1.1. So anyone building the software (producing an application binary) could just do that, and the intermediate packages would not have to change at all. But it doesn't seem like the standard Go toolchain automates this.
bandrami16 hours ago
If you never call it why is it there?
inejge14 hours ago
It's in the library you're using, and you're not using all of it. I've had that exact situation: a dependency was vulnerable in a very specific set of circumstances which never occurred in my usage, but it got flagged by Dependabot and I received a couple of unnecessary issues.
3forman hour ago
This reminds me that the vulnerability scanner at my company flagged every version of pandas because it has some function in the API that allows to run some equivalent of eval. Thankfully I have the ability to issue a waiver with "does not apply".
solatic10 hours ago
I sympathize with the author, and in principle I find myself nodding along with his prescriptions, but one of the benefits of Dependabot (and Renovate) are that they are language-agnostic. Depending on how many repositories, and how many languages, and upon whom the maintenance burden falls, there's a lot of value to be had. It may not really be feasible to add "the correct" CI workflows to every repository, and the alternative (nothing) inevitably ends up in repositories where dependencies have not been updated in years.
It's good optimization advice, if you have the time, or suffer enough from the described pain points, to apply it.
cedws9 hours ago
I don’t know why the industry collectively accepted these security scanners (code + containers) that don’t even do the most basic of static analysis to see if the vulnerable code is reachable. Companies are breaking their backs trying to maintain a constant zero vulnerabilities in their container images when 99% of the CVEs don’t actually affect them anyway. The kicker is that updating the dependencies probably just introduces new CVEs to be discovered later down the line because most software does not backport fixes.
eru9 hours ago
> The kicker is that updating the dependencies probably just introduces new CVEs to be discovered later down the line because most software does not backport fixes.
I don't understand how the second part of that sentence is connected to the first.
cedws7 hours ago
I could have written it more clearly. If you’re forced to upgrade dependencies to the latest version to get a patch, the upgrade likely contains new unrelated code that adds more CVEs. When fixes are backported you can get the patch knowing you aren’t introducing any new CVEs.
12_throw_away19 hours ago
I'm a little hung up on this part:
> These PRs were accompanied by a security alert with a nonsensical, made up CVSS v4 score and by a worrying 73% compatibility score, allegedly based on the breakage the update is causing in the ecosystem.
Where did the CVSS score come from exactly? Does dependabot generate CVEs automatically?
pornel7 hours ago
CVSS has some formula, but it's a fundamentally flawed concept. It's a score for the worst possible case, not for a typical case. It's for ass-covering, not for being informative about the real risk.
For every boring API you can imagine someone using it for protecting nuclear launch codes, while having it exposed to arbitrary inputs from the internet. If it's technically possible, even if unrealistically stupid, CVSS treats it the same as being a fact, and we get spam about the sky falling due to ReDoS.
This is made worse by GitHub's vulnerability database being a quantity-over-quality dumping ground, and by the absolute lack of intelligence in Dependabot (ironic for a company aggressively inserting AI everywhere else).
amluto16 hours ago
I’m kind of curious whether anything is vulnerable to this bug at all. It seems like it depends on calling the offending function incorrectly, which seems about as likely to cause the code using it to unconditionally fail to communicate (and thus have already been fixed) as to fail in a way that’s insecure.
samhclarka day ago
This makes sense to me. I guess I'll start hunting for the equivalent of `govulncheck` for Rust/Cargo.
Separately, I love the idea of the `geomys/sandboxed-step` action, but I've got such an aversion to use anyone else's actions, besides the first-party `actions/*` ones. I'll give sandboxed-step a look, sounds like it would be a nice thing to keep in my toolbox.
LawnGnome3 hours ago
I was actually working on this last week, funnily enough. I've been working on a capability analysis tool for Rust, and if you're already generating a call graph via static analysis, taking that and matching it against the function-level vulnerability data that exists in RustSec isn't that hard.
Hopefully I'll have something out next week.
FiloSottilea day ago
> I've got such an aversion to use anyone else's actions, besides the first-party `actions/*` ones
Yeah, same. FWIW, geomys/sandboxed-step goes out of its way to use the GitHub Immutable Releases to make the git tag hopefully actually immutable.
conradludgatea day ago
LawnGnome3 hours ago
Although, unfortunately, not all RustSec advisories include function-level vulnerability metadata in practice.
bpavuka day ago
> I guess I'll start hunting for the equivalent of `govulncheck` for Rust/Cargo.
how about `cargo-audit`?
mirashii21 hours ago
cargo-audit is not quite at an equivalent level yet, it is lacking the specific features discussed in the post that identify the vulnerable parts of the API surface of a library. cargo-audit is like dependabot and others here in that it only tells you that you're using a version that was vulnerable, not that you're using a specific API that was vulnerable.
hobofan21 hours ago
Sadly, since it relies on the Cargo.lock being correct, it is also affected by bugs that place dependencies in the Cargo.lock which are not compiled into the binary; e.g. weak features in Cargo currently cause unused dependencies to show up in the Cargo.lock.
esafaka day ago
I automate updates with a cooldown, security scanning, and the usual tests. If it passes all that I don't worry about merging it. When something breaks, it is usually because the tests were not good enough, so I fix them. The next step up would be to deploy the update into a canary cluster and observe it for a while. Better that than accrue tech debt. When you update on "your schedule" you still should do all the above, so why not just make it robust enough to automate? Works for me.
FiloSottilea day ago
For regular updates, because you can minimize but not eliminate risk. As I say in the article that might or might not work for your requirements and practices. For libraries, you also cause compounding churn for your dependents.
For security vulnerabilities, I argue that updating might not be enough! What if your users’ data was compromised? What if your keys should be considered exposed? But the only way to have the bandwidth to do proper triage is by first minimizing false positives.
duskdozer8 hours ago
>For libraries, you also cause compounding churn for your dependents.
This is the thing that I don't really understand but that seems really popular and gaining ground. The article's section "Test against latest instead of updating" seems like the obvious thing to do, as in, keep a range of compatible versions of dependencies, and only restrict this when necessary, in contrast to deployment- or lockfile-as-requirement, which restricts liberally. Maybe it's just a bigger deal for me because of how disruptive UI changes are.
jackfranklyn10 hours ago
Dependabot works when you have a team that reviews PRs promptly and CI that catches breaking changes. For solo founders and tiny teams, those automated PRs pile up into noise and you stop reviewing them entirely. Then you've got 30 unmerged dependency bumps you're too scared to batch-merge.
What I do instead: monthly calendar reminder, run npm audit, update things that actually matter (security patches, breaking bugs), ignore patch bumps on stable deps. The goal isn't "every dep is always current" - it's "nothing in production has a known vulnerability". Very different targets.
woodruffw20 hours ago
I think this is pretty good advice. I find Dependabot useful for managing scheduled dependency bumps (which in turn is useful for sussing out API changes, including unintended semver breakages from upstreams), but Dependabot’s built-in vulnerability scanning is strictly worse than just about every ecosystem’s own built-in solution.
p1nkpineapple13 hours ago
we struggle with a similar problem at my workplace - vuln alerts from GCP container image scans put a ton of noise into Vanta which screams bloody murder at CVEs in base images which we A) can't fix, and B) aren't relevant as they're not on the hot path (often some random dependency that we don't use in our app).
Are there any tools for handling these kind of CVEs contextually? (Besides migrating all our base images to chainguard/docker hardened images etc)
maciuz13 hours ago
I'm working at a medium sized SaaS vendor. We've been using Aikido Code which tries to filter vulnerability impact using AI. Results are generally positive, though we are still struggling with keeping the amount of CVEs down, due to the size of our code bases and the amount of dependencies.
SahAssar9 hours ago
I'd be wary of trusting AI with something like that, especially if I had to assert to a third party that we absolutely do not have a vulnerability.
SamuelAdamsa day ago
What’s nice about Dependabot is that it works across multiple languages and platforms. Is there an equivalent to govulncheck for say NPM or Python?
mirashii21 hours ago
> Is there an equivalent to govulncheck for say NPM or Python?
There never could be; these languages are simply too dynamic.
woodruffw20 hours ago
In practice this isn’t as big of a hurdle as you might expect: Python is fundamentally dynamic, but most non-obfuscated Python is essentially static in terms of callgraph/reachability. That means that “this specific API is vulnerable” is something you can almost always pinpoint usage for in real Python codebases. The bigger problem is actually encoding vulnerable API information (not just vulnerable package ranges) in a way that’s useful and efficient to query.
(Source: I maintain pip-audit, where this has been a long-standing feature request. We’re still mostly in a place of lacking good metadata from vulnerability feeds to enable it.)
caned15 hours ago
The imports themselves may be dynamic. I once did a little review of dependencies in a venv that had everything to run pytorch llama. The number of imports gated by control flow or having a non-constant dependency was nontrivial.
woodruffw15 hours ago
Imports gated by control flow aren’t a huge obstacle, since they’re still statically observable. But yeah, imports that are fully dynamic i.e. use importlib or other import machinery blow a hole in this.
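The distinction shows up in a small standard-library sketch (module names are illustrative): a plain AST walk sees imports even when they're nested under control flow, but a computed `importlib.import_module` call is opaque to it:

```python
import ast

def imported_modules(source: str) -> set:
    """Collect module names from import statements, including those
    nested under control flow -- they are still visible in the AST."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module)
    return mods

gated = """
import sys
if sys.platform == "linux":
    import fcntl            # gated by control flow, but statically visible
"""
dynamic = """
import importlib
name = "fcn" + "tl"
mod = importlib.import_module(name)   # invisible to the AST walk
"""

print(sorted(imported_modules(gated)))    # ['fcntl', 'sys']
print(sorted(imported_modules(dynamic)))  # ['importlib'] -- fcntl is missed
```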
171862744010 hours ago
Idiomatic Python often branches on getattr to implement the interface and that is really hard to analyze from the outside.
woodruffw4 hours ago
I wouldn’t say that’s particularly idiomatic in modern Python. But even when it occurs, it’s not the end of the world: if it’s a computed getattr, you consider the parent object tainted for the purpose of reachability. This is less precise, but it’s equivalent to what the programmer has expressed (and is still more precise than flagging the entire codebase as vulnerable because it uses a dependency.)
mirashii18 hours ago
The thing is that "almost always" isn't good enough. If it can't prove it, then a human has to be put back in the loop to verify and assert — on sensitive timelines, when you have regulatory requirements on time to acknowledge and resolve CVEs in dependencies.
woodruffw15 hours ago
Sure, but I think the useful question is whether it’s good enough for the median Python codebase. I see the story as similar to that of static typing in Python; Python’s actual types are dynamic and impossible to represent statically with perfect fidelity, but empirically static typing for Python has been very successful. This is because the actual exercised space is much smaller than the set of all valid Python programs.
silverwind12 hours ago
It's definitely possible. Author publishes a list of vulnerable symbols, and if these symbols have no use, your module is not vulnerable. Test coverage analysis tools have been doing such analysis for ages.
danudey20 hours ago
With type hints it's possible for analysis to narrow the possibilities from "who knows what's what" down to "assuming these type hints are correct, this function is never called"; not perfect (until we can statically assert that type hints are correct, which maybe we can, idk) but still a pretty good step.
robszumski21 hours ago
I commented elsewhere but our team built a custom static analysis engine for JS/TS specifically for the dep update use-case. It was hard, had to do synthetic execution, understands all the crazy remapping and reexporting you can do, etc. Even then it’s hard to penetrate a complex Express app due to how the tree is built up.
tech221 hours ago
For python maybe pip-audit, and perhaps bandit for a little extra?
It doesn't have the code tracing ability that my sibling is referring to, but it's better than nothing.
mehagara day ago
Is there an equivalent for the JS ecosystem? If not, having Dependabot update dependencies automatically after a cooldown still seems like a better alternative, since you are likely to never update dependencies at all if it's not automatic.
seattle_spring21 hours ago
RenovateBot supports a ton of languages, and ime works much better for the npm ecosystem than Dependabot. Especially true if you use an alternative package manager like yarn/pnpm.
mook21 hours ago
Too bad dependabot cooldowns are brain-dead. If you set a cooldown for one week, and your dependency can't get their act together and makes a release daily, it'll start making PRs for the first (oldest) release in the series after a week even though there's nothing cool about the release cadence.
kleyd21 hours ago
The cooldown is to allow vulnerabilities to be discovered. So auto update on passing tests, which should include an npm audit check.
operator-name20 hours ago
The custom GitHub Actions approach is very customisable and flexible. In theory you could create and even auto-approve bumps.
If you want something more structured, I’ve been playing with and can recommend Renovate (no affiliation). Renovate supports far more ecosystems, has a better community and customisation.
Having tried it, I can't believe how relatively poor Dependabot, the default tool we all put up with, is. Take something simple like multi-layer Dockerfiles: this has been a Docker feature for a while now, yet it's still silently unsupported by Dependabot!
esafak19 hours ago
That's what a lack of competition does. Github is entrenched, complacent.
8bitme6 hours ago
The issue with not updating often enough is that if there is a zero-day and you're far enough behind, you'll be forced to work out how to upgrade to the latest patched version, with a potentially painful upgrade path in between.
adamdecaf21 hours ago
govulncheck is the much better answer and we use it.
We also let renovate[bot] (similar to dependabot) merge non-major dep updates if tests pass. I hardly notice when deps have small updates.
https://github.com/search?q=org%3Amoov-io+is%3Apr+is%3Amerge...
robszumski21 hours ago
We've built a modern Dependabot (it can also work alongside it) agent: Fossabot analyzes your app code to understand how you use your dependencies, then delivers a custom safe/needs-review verdict per upgrade, or groups safe upgrades together to make more strategic jumps. We can also fix breaking changes because the agent's context is so complete.
https://fossa.com/products/fossabot/
We have some of the best JS/TS analysis out there based on a custom static analysis engine designed for this use-case. You get free credits each month and we’d love feedback on which ecosystems are next…Java, Python?
Totally agree with the author that static analysis like govulncheck is the secret weapon to success with this problem! Dynamic languages are just much harder.
We have a really cool eval framework as well that we’ve blogged about.
MattIPv421 hours ago
Are y'all aware your agent's name clashes with an established and rather popular streaming bot/tool, https://fossabot.com ?
stavros20 hours ago
That would explain why I tried to get vulnerability notifications and instead all my code was streamed to Twitch.
[deleted]20 hours agocollapsed
NewJazz18 hours ago
Spitballing some alt names
Fossadep
Fossacheck
Fossasafe
insin14 hours ago
Fossamatta
Fossahappenin
Fossagoinon
robszumski19 hours ago
example analysis on a Dependabot PR: https://github.com/daniellockard/tiltify-api-client/pull/36#...
[deleted]15 hours agocollapsed
[deleted]16 hours agocollapsed
necubi20 hours ago
Would love to see this for Rust!
AutumnsGarden21 hours ago
I think python and go could be great use cases
snowhalea day ago
govulncheck is so much better for Go projects. it actually traces call paths so you only get alerted if the vulnerable function is reachable from your code. way less noise.
bpavuka day ago
is there a `govulncheck`-like tool for the JVM ecosystem? I heard Gradle has something like that in its ecosystem.
search revealed Sonatype Scan Gradle plugin. how is it?
wpollock18 hours ago
It's been a few years, but for Java I used OWASP Dependency-Check: <https://owasp.org/www-project-dependency-check/>, which downloads the NVD (so the first run was slow) and scans all dependencies against that. I ran it from Maven as part of the build.
arianvanp19 hours ago
At this point your steps are so simple I'd skip the GitHub Actions security tyre fire altogether. Just run the Go commands whilst listening for GitHub webhooks and updating checks with the GitHub Checks API.
GitHub actions is the biggest security risk in this whole setup.
Honestly not that complicated.
NewJazz18 hours ago
I learned recently that self-hosted GHA runners are just VMs your actions have shell access to, and cleanup is on the honor system for the most part.
Absolutely wild.
fulafel13 hours ago
Alert fatigue has been long identified and complained about, this is just a new kind of that. But it's hitting a different set of people.
seg_lola day ago
Be wary of upgrading dependencies too quickly. This is how supply chain incursions spread. Time is a good firwall.
ImJasonHa day ago
Here's a Go mod proxy-proxy that lets you specify a cooldown, so you never get deps newer than N days/weeks/etc
https://github.com/imjasonh/go-cooldown
It's not running anymore but you get the idea. It should be very easy to deploy anywhere you want.
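The cooldown idea boils down to a small filter — a generic sketch (not go-cooldown's actual code), with made-up versions and dates:

```python
from datetime import datetime, timedelta, timezone

def eligible_versions(releases, cooldown_days=7, now=None):
    """Keep only releases that have been public for at least
    `cooldown_days`, giving the ecosystem time to spot malware.
    `releases` maps version string -> publication datetime (UTC)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=cooldown_days)
    return sorted(v for v, published in releases.items() if published <= cutoff)

now = datetime(2026, 2, 1, tzinfo=timezone.utc)
releases = {
    "1.1.0": datetime(2026, 1, 1, tzinfo=timezone.utc),
    "1.2.0": datetime(2026, 1, 30, tzinfo=timezone.utc),  # 2 days old: held back
}
print(eligible_versions(releases, cooldown_days=7, now=now))  # ['1.1.0']
```

A proxy applying this filter serves the freshest version that has survived the cooldown, so a compromised release pulled within days never reaches you.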
esafaka day ago
They fixed that last summer: https://github.blog/changelog/2025-07-01-dependabot-supports...
jamietannaa day ago
Yep, and we've had it for a while in Renovate too: https://docs.renovatebot.com/key-concepts/minimum-release-ag...
(I'm a Renovate maintainer)
(I agree with Filippo's post and it can also be applied to Renovate's security updates for Go modules - we don't have a way, right now, of ingesting better data sources like `govulncheck` when raising security PRs)
bityard21 hours ago
A firwall also makes a good firewall, once ignited.
Hamukoa day ago
>Time is a good firwall.
That just reminds me that I got a Dependabot alert for CVE-2026-25727 – "time vulnerable to stack exhaustion Denial of Service attack" – across multiple of my repositories.
literallyroya day ago
The Go ecosystem is pretty good about being backwards compatible. Dependabot's regular update PRs once a week seem like a good option in addition to govulncheck.
NewJazz18 hours ago
Besides Go, what languages have this type of fidelity for vulnerability scope? Python? Node? Rust?
focusedmofoa day ago
Is there an equivalent for JS/TS?
maelito5 hours ago
Better: leave GitHub for Codeberg.
atypeoferror5 hours ago
How is this comment in any way relevant to the article or this discussion? Does Codeberg provide static analysis for CVE verification?
hokkos7 hours ago
Most CVEs now are pure spam without value; all I get is dev dependencies affected by regexes that could take too long. Scanners should do a better job of differentiating between dependencies and dev dependencies.
jgalt2126 hours ago
The lead example is about the (*Point).MultiScalarMult method (not a golang person so perhaps wrong terminology).
Instead of, or in addition to, updating all your dependencies, perhaps it would be better to emit monkey patches that turn unsafe methods into no-ops, or raise an exception if such methods are invoked, e.g. "paste these lines at the beginning of main to ensure you are not impacted by CVE-2026-XXXX."
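In Python, that mitigation could look like the sketch below; the `Point` class stands in for the hypothetical vulnerable dependency, and the CVE id is a placeholder:

```python
# Stand-in for a third-party class with a vulnerable method, so the
# sketch is self-contained; in practice you'd import the real one.
class Point:
    def multi_scalar_mult(self, scalars, points):
        return "dangerous result"

def disable(cls, name, cve):
    """Replace cls.name with a stub that raises, so any reachable
    call path fails loudly instead of hitting the vulnerable code."""
    def stub(*args, **kwargs):
        raise RuntimeError(f"{cls.__name__}.{name} disabled: see {cve}")
    setattr(cls, name, stub)

# "Paste these lines at the beginning of main":
disable(Point, "multi_scalar_mult", "CVE-2026-XXXX")

try:
    Point().multi_scalar_mult([1], [2])
except RuntimeError as e:
    print(e)  # Point.multi_scalar_mult disabled: see CVE-2026-XXXX
```

Raising (rather than a silent no-op) is the safer variant: it proves at runtime that the path really was unreachable, or tells you immediately that it wasn't.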
KPGv215 hours ago
This is a symptom of JS culture, where people believe you must at all times and in all places have THE latest version of every library, and you MUST NOT wait more than a day to update your entire codebase accordingly.
lazyasciiart15 hours ago
This blog post is entirely about Go, and doesn’t mention JS at all.
TZubiri21 hours ago
Coming from someone with an almost ascetic dependency discipline, I look at some meta-dependencies as an outsider (dependabot, pnpm/yarn, poetry/venv/pipenv, snap/flatpak), a solution to too many dependencies that is yet another dependency, it feels like trying to get out of a hole by digging.
I think that for FOSS the F as in Gratis is always going to be the root cause of security conflicts, if developers are not paid, security is always going to be a problem, you are trying to get something out of nothing otherwise, the accounting equation will not balance, exploiting someone else is precisely the act that leaves you open to exploitation (only according to Nash Game Theory). "158 projects need funding" IS the vector! I'm not saying that JohnDoe/react-openai-redux-widget is going to go rogue, but with what budget are they going to be able to secure their own systems?
My advice is, if it ever comes the point where you need to install dependencies to control your growing dependency graph? consider deleting some dependencies instead.
171862744010 hours ago
> for FOSS the F as in Gratis
Isn't FOSS a combination of the diverging ideas of "Open Source" and "Free Software"? The "Free" in "Free Software" very much does not mean "Gratis".
TZubiri8 hours ago
Yes, it's a joke. The Free in Free Software is sold as being Free as in Freedom to devs by recruiters of the cause, however the bulk of actual consumers see Free Software as equivalent to Open Source and the defining characteristic for them is Free as in Gratis.
17186274407 hours ago
Honestly, that whole "free as in X" problem to me seems like an English only problem. As an ESL I perceive "free" to be the adjective to "freedom" by default and the other meaning to be a contraction of "free of charge".
indiekitai19 hours ago
The core problem is that Dependabot treats dependency graphs as flat lists. It knows you depend on package X, and X has a CVE, so it alerts you. But it has no idea whether you actually call the vulnerable code path.
Go's tooling is exceptional here because the language was designed with this in mind - static analysis can trace exactly which symbols you import and call. govulncheck exploits this to give you meaningful alerts.
The npm ecosystem is even worse because dynamic requires and monkey-patching make static analysis much harder. You end up with dependency scanners that can't distinguish between "this package could theoretically be vulnerable" and "your code calls the vulnerable function."
The irony is that Dependabot's noise makes teams less secure, not more. When every PR has 12 security alerts, people stop reading them. Alert fatigue is a real attack surface.
newzino20 hours ago
The part that kills me is the compliance side. SOC2 audits and enterprise security reviews treat "open Dependabot alerts" as a metric. So teams merge dependency bumps they don't understand just to get the count to zero before the next audit. That's actively worse for security than ignoring the alerts.
govulncheck solves this if your auditor understands it. But most third-party security questionnaires still ask "how do you handle dependency vulnerabilities?" and expect the answer to involve automated patching. Explaining that you run static analysis for symbol reachability and only update when actually affected is a harder sell than "we merge Dependabot PRs within 48 hours."
clarabennett265 hours ago
[dead]
aswihart20 hours ago
> Dependencies should be updated according to your development cycle, not the cycle of each of your dependencies. For example you might want to update dependencies all at once when you begin a release development cycle, as opposed to when each dependency completes theirs.
We're in this space and our approach was to supplement Dependabot rather than replace it. Our app (https://www.infield.ai) focuses more on the project management and team coordination aspect of dependency management. We break upgrade work down into three swim lanes: a) individual upgrades that are required in order to address a known security vulnerability (reactive, most addressed by Dependabot) b) medium-priority upgrades due to staleness or abandonedness, and c) framework upgrades that may take several months to complete, like upgrading Rails or Django. Our software helps you prioritize the work in each of these buckets, record what work has been done, and track your libyear over time so you can manage your maintenance rotation.