Hey HN, we are Max, Kieran, and Aahel from Midship (https://midship.ai). Midship makes it easy to extract data from unstructured documents like PDFs and images.
Here’s a video showing it in action: https://www.loom.com/share/ae43b6abfcc24e5b82c87104339f2625?..., and a demo playground (no signup required!) to test it out: https://app.midship.ai/demo
We started five months ago, initially trying to build an AI natural-language workflow builder that would be a simpler alternative to Zapier or Make.com. However, most of our users were much more interested in the basic (and not very good) document extraction feature we had. Seeing people spend hours a day manually extracting data from PDFs inspired us to build what has become Midship!
The problem is that despite all our progress in software, huge amounts of business data still lives in PDFs and images. Sure, you can OCR them, but getting clean, structured data out is still painful. Most existing tools just give you a blob of markdown - leaving you to figure out which parts matter and how they relate.
We've found that combining OCR with language models lets us do something more useful: extract specific fields and tables that users actually care about. The LLMs help correct OCR mistakes and understand context (like knowing that "Inv#" and "Invoice Number" mean the same thing).
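For illustration, here's a heavily simplified sketch of the idea (this isn't our production pipeline; the model choice and invoice schema are just placeholders): OCR text plus a target schema goes to an LLM, and structured JSON comes out.

    # Simplified sketch: OCR text + target schema -> structured JSON.
    # Model name and schema are illustrative, not our production setup.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    SCHEMA = {
        "invoice_number": "string",  # should match "Inv#", "Invoice No.", etc.
        "invoice_date": "YYYY-MM-DD",
        "total_amount": "number",
    }

    def extract_fields(ocr_text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": (
                    "Extract the fields below from this OCR transcript. Treat "
                    "synonymous labels ('Inv#' vs 'Invoice Number') as the same "
                    "field and fix obvious OCR errors. Return JSON only.\n"
                    + json.dumps(SCHEMA))},
                {"role": "user", "content": ocr_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

The interesting work is in everything around this: layout-aware OCR, schema building, and catching the failure modes on long or messy documents.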
We have two main kinds of users today: non-technical users who extract data via our web app, and developers who use our extraction API. We initially focused on the former, since they seemed like an underserved part of the market, but we've received a lot of interest from developers who face the same issues.
For pricing, we currently charge a monthly SaaS fee per seat for the web app, and volume-based pricing for the API.
We’re really excited to share what we’ve built so far and look forward to any feedback from the community!
ctippett (4 hours ago)
Congrats on the launch. I just sent y'all an email – I'm curious what you can do with airline crew rosters.
monkeydust (5 hours ago)
Here's a real-world use case: our company has moved to a new pension provider. This provider, like the old one, sucks at giving me a good way to navigate the 120 funds I can invest in.
I want to create something that can paginate through 12 pages of HTML, perform clicks, download the PDF fund factsheets, and extract data from each factsheet into Excel or CSV. Can this help? What's the best way to deal with the initial task of automating webpage interactions systematically?
navaed0 (14 minutes ago)
You should check out Twin
_hfqa (an hour ago)
Have you looked into tools like https://www.multion.ai or https://www.browserbase.com?
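If you'd rather script it yourself, the paginate/click/download part is a few lines of Playwright. A rough sketch (the URL and selectors are placeholders you'd adapt to the portal):

    # Rough sketch with Playwright's sync API; URL and selectors are
    # placeholders -- every provider portal differs.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example-pension-portal.test/funds")  # placeholder
        for _ in range(12):  # the 12 listing pages mentioned above
            for link in page.locator("a.factsheet").all():  # placeholder selector
                with page.expect_download() as dl:
                    link.click()
                dl.value.save_as("factsheets/" + dl.value.suggested_filename)
            page.get_by_role("link", name="Next").click()  # stop when no Next
        browser.close()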
maxmaioop (4 hours ago)
This is an interesting use case! We've heard similar stories from people dealing with pensions. Today we can solve the "extract data from a factsheet into Excel or CSV" step out of the box. Shoot me an email at [email protected]!
prithvi24 (an hour ago)
What's pricing look like with HIPAA compliance?
serjester (6 hours ago)
Honest question, but how do you see your business being affected as foundational models improve? While I have massive complaints about them, Gemini + structured outputs is working remarkably well for this internally, and it's only getting better. It's also an order of magnitude cheaper than anything I've seen commercially.
arjvik (3 hours ago)
Curious - have you compared Gemini against Anthropic's and OpenAI's offerings here? I need to do something similar for a one-off task and simply need to choose a model to use.
serjester (2 hours ago)
Gemini is an awful developer experience, but accuracy for OCR tasks is close to perfect. The pricing is also basically unbeatable: it works out to 1k-10k pages per dollar depending on the model. OpenAI has subtle hallucinations, and I haven't profiled Anthropic.
arjvik (11 minutes ago)
Thanks!
maxmaioop (5 hours ago)
We're excited for foundational models to improve because we hope they will unlock a lot more use cases: things like analysis after extraction, accurately extracting extremely complex documents, and so on!
ivanvanderbyl (7 hours ago)
Congrats on the launch!
I’m curious to hear more about your pivot from AI workflow builder to document parsing. I can see the connection there, but the original idea seems like a much larger opportunity than parsing PDFs to tables, which is an already very crowded space. What verticals did you find have this problem that gave you enough conviction to pivot?
maxmaioop (6 hours ago)
We saw initial traction with real estate firms extracting property data like rent rolls, but we've also seen traction in other verticals like accounting and intake forms. The original idea was very ambitious, and when we talked to potential customers they all seemed happy with the existing players.
zh2408 (8 hours ago)
Saw Reducto released a benchmark related to your product (https://reducto.ai/blog/rd-tablebench). Curious about your take on it and how well Midship performs.
maxmaioop (8 hours ago)
The Reducto guys are great! Their benchmark is not exactly how we would measure our product, since we extract into a user-specified template vs. extracting into markdown (WYSIWYG). That said, their eval aligns with our internal findings on commercial OCR offerings.
nostrebored (8 hours ago)
How does your accuracy compare with VLMs like ColFlor and ColPali?
kietay (8 hours ago)
We think about accuracy in two ways.
First, as a function of the independent components in our pipeline. For example, we rely on commercial models for document layout and character recognition; we evaluate each of these, select the most accurate, and fine-tune where required.
Second, we evaluate accuracy per customer. However good the individual components are, if the model "misinterprets" a single column, every row of data will be wrong in some way. This is more difficult to put a top-level number on, and something we're still working to scale per customer, but it's much easier when the customer has historic extractions they've done by hand.
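As an illustrative version of that second check (not our internal eval code): score each field against the customer's hand-keyed history, so a misread column surfaces as a near-zero score on one field instead of hiding inside an overall average.

    # Illustrative per-customer scoring: compare extracted rows to the
    # customer's hand-keyed gold data, reporting accuracy per field.
    def field_accuracy(gold_rows: list[dict], extracted_rows: list[dict]) -> dict:
        totals: dict = {}
        hits: dict = {}
        for gold, pred in zip(gold_rows, extracted_rows):
            for field, value in gold.items():
                totals[field] = totals.get(field, 0) + 1
                if str(pred.get(field, "")).strip() == str(value).strip():
                    hits[field] = hits.get(field, 0) + 1
        return {f: hits.get(f, 0) / totals[f] for f in totals}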
tlofreso (9 hours ago)
Congrats on the launch... You're in a crowded space. What differentiates Midship? What are you doing that's novel?
kietay (8 hours ago)
Cofounder here.
Great Q - there is definitely a lot of competition in dev-tool offerings, but less so in end-to-end experiences for non-technical users.
Some of the things we offer above and beyond dev tools:
1. Schema building to define “what data to extract”
2. A hosted web app to review, audit, and export extracted data
3. Integrations into downstream applications like spreadsheets
Outside of those user-facing pieces, the biggest engineering effort for us has been dealing with very complex inputs, like 100+ page PDFs. Just dumping a doc into ChatGPT and asking nicely for structured data falls over in both obvious ways (input/output token limits exceeded) and subtle ones (e.g. missing a row in the middle of the extraction). One simplified mitigation is sketched below.
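Roughly (illustrative, and simplified relative to what we actually do): extract over overlapping page windows so no single call exceeds the context limit, then merge and dedupe the rows that appear in two windows.

    # Sketch: per-window extraction for long docs. Window size and overlap
    # are guesses; extract_rows would be an LLM call like the one above.
    def page_windows(pages: list[str], size: int = 10, overlap: int = 1):
        step = size - overlap
        for start in range(0, len(pages), step):
            yield "\n\n".join(pages[start:start + size])

    def extract_long_doc(pages: list[str], extract_rows) -> list[dict]:
        rows, seen = [], set()
        for window in page_windows(pages):
            for row in extract_rows(window):      # one LLM call per window
                key = tuple(sorted(row.items()))  # dedupe overlap duplicates
                if key not in seen:
                    seen.add(key)
                    rows.append(row)
        return rows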
seany62 (8 hours ago)
Are users able to export their organized data?
maxmaioop (8 hours ago)
Yes, today we support exports to CSV or Excel from our web app!
hk1337 (8 hours ago)
This is interesting.
Can you do this with emails?
Brajeshwar (7 minutes ago)
A friend seems to be doing this for email (https://dwata.com). They are early but promising.
maxmaioop (8 hours ago)
We currently support PDF, DOCX, most image types (JPEG/JPG, PNG, HEIC), and Excel.
Saving the email as a PDF would work!
tlofreso (8 hours ago)
And if yes, be specific in answering. Emails are a bear! Emails can have several file types as attachments, including other emails, ZIP files, and inline images where position matters for context.
hubraumhugo (8 hours ago)
Congrats on the launch! A quick search in the YC startup directory brought up 5-10 companies doing pretty much the same thing:
- https://www.ycombinator.com/companies/tableflow
- https://www.ycombinator.com/companies/reducto
- https://www.ycombinator.com/companies/mindee
- https://www.ycombinator.com/companies/omniai
- https://www.ycombinator.com/companies/trellis
At the same time, accurate document extraction is becoming a commodity with powerful VLMs. Are you planning to focus on a specific industry, or how do you plan to differentiate?
maxmaioop (8 hours ago)
Yes, there is definitely a boom in document-related startups. We see our niche as focusing on non-technical users: we've focused on making it easy to build schemas, on the audit-and-review experience, and on integrations with downstream applications.
hermitcrab (3 hours ago)
Do you know if there are any good (pref. C++) libraries for extracting data tables from PDFs?
tlofreso (8 hours ago)
"accurate document extraction is becoming a commodity with powerful VLMs"
Agree.
The capability is fairly trivial for orgs with decent technical talent. The tech / processes all look similar:
User uploads file --> Azure prebuilt-layout returns .MD --> prompt + .MD + schema sent to LLM --> JSON returned. Do whatever you want with it.
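Sketched in Python it's something like this (SDK details vary by version, so treat the exact signatures as assumptions):

    # The pipeline above as a sketch; endpoint/key/model are placeholders.
    import json
    from azure.ai.documentintelligence import DocumentIntelligenceClient
    from azure.core.credentials import AzureKeyCredential
    from openai import OpenAI

    di = DocumentIntelligenceClient(
        "https://<your-resource>.cognitiveservices.azure.com",
        AzureKeyCredential("<key>"))
    llm = OpenAI()

    def file_to_json(path: str, schema: dict) -> dict:
        with open(path, "rb") as f:              # 1. user uploads file
            md = di.begin_analyze_document(      # 2. prebuilt-layout -> .MD
                "prebuilt-layout", f,
                output_content_format="markdown",
            ).result().content
        resp = llm.chat.completions.create(      # 3. prompt + .MD + schema
            model="gpt-4o",
            response_format={"type": "json_object"},
            messages=[{"role": "user", "content":
                       "Fill this schema from the document as JSON:\n"
                       + json.dumps(schema) + "\n\nDocument:\n" + md}],
        )
        return json.loads(resp.choices[0].message.content)  # 4. JSON returned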
kietay (7 hours ago)
Totally agree that this is becoming the standard "reference architecture" for this kind of pipeline. The only thing that complicates it a lot today is complex inputs. For simple 1-2 page PDFs, what you describe works quite well out of the box, but for 100+ page docs it starts to fall over in the ways I described in another comment.
tlofreso (7 hours ago)
Are really large inputs solved at Midship? If so, I'd consider that a differentiator (at least today). The demo's limited to 15 pages, and I don't see any marketing around long-context or complex inputs on the site.
I suspect this problem gets solved in the next iteration or two of commodity models. In the meantime, being smart about how the context gets divvied up works OK.
I do like the UI you appear to have for citing information: drawing polygons around the data and showing where it appears in the PDF. Nice.
Kiro (6 hours ago)
Why all those steps? Why not just file + prompt to JSON directly?
tlofreso (5 hours ago)
Having the text (for now) is still pretty important for quality output. The vision models are quite good, but not a replacement for a quality OCR step. A combination of Text + Vision is compelling too.
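e.g. a rough sketch of the combination, with the model choice as an example only: hand the model both the OCR transcript and the page image so it can cross-check one against the other.

    # Sketch: text + vision in one request. Model and prompt are examples.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def extract_text_plus_vision(ocr_text: str, page_png: bytes, prompt: str) -> str:
        image_b64 = base64.b64encode(page_png).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": [
                {"type": "text",
                 "text": prompt + "\n\nOCR transcript:\n" + ocr_text},
                {"type": "image_url",
                 "image_url": {"url": "data:image/png;base64," + image_b64}},
            ]}],
        )
        return resp.choices[0].message.content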
mitchpatin (8 hours ago)
TableFlow co-founder here - I don't want to distract from the Midship launch (congrats!) but did want to add my 2 cents.
We see a ton of industries/use-cases still bogged down by manual workflows that start with data extraction. These are often large companies throwing many people at the issue ($$). The vast majority of these companies lack the technical teams required to leverage VLMs directly (or at least the desire to manage their own software). There's a ton of room for tailored solutions here, and I don't think it's a winner-take-all space.
maxmaioop (7 hours ago)
+1 to what Mitch said. We believe there is a large market of non-technical users who can now automate extraction tasks but don't know how to interact with APIs. Midship is another option for them that requires zero programming!
erulabs (7 hours ago)
Execution is everything. Not to drop a link in someone else's HN launch, but I'm building https://therapy-forms.com and these guys are way ahead of me on UI, polish, and probably overall quality. I do think there are plenty of slightly different niches here, but even if there weren't, execution is everything. Heck, it's likely I'll wind up as a Midship customer; my spare time to fiddle with OCR models is desperately limited, and all I want to do is sell to clinics.
_hfqa (41 minutes ago)
Just a heads up: I tried to sign up, but the button doesn't seem to work.