cullenking15 hours ago
We do something similar for some limited geospatial search using Elasticsearch. We make a set of H3 indexes for each of the hundreds of millions of GPS recordings on our service and store them in Elasticsearch. Geospatial queries become full-text search queries, where a point is on the line if the set of H3 indexes contains the point. You can do queries on how many cells overlap, which lets you match geospatial tracks on the same paths, and with ES coverage queries you can tune how much overlap you want.
Instead of using integer IDs for the hexes, we created an encoded version of the ID with the property that removing a character gets you the containing parent of the cell. This means we can do basic containment queries by querying with a low-resolution hex (short string) as a prefix query. If a GPS track goes through this larger parent cell, the track will have hexes with the same prefix. You don’t get perfect control of distances because hexes have varying diameters (or rather approximations of diameters, since they aren’t circles, they’re hexes), but in practice, and at scale, for a product that doesn’t require high precision, it’s very effective.
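The encoding trick can be sketched in a few lines (the names and exact serialization here are invented for illustration; the real system presumably derives the digits from the H3 cell index):

```python
# Hypothetical sketch of a prefix-friendly cell encoding. An H3 cell at
# resolution r is a base cell plus r child digits in 0..6; writing those
# digits out as characters makes "drop the last character" the same as
# "take the parent", so containment becomes a plain string-prefix test.

def encode_cell(base_cell: int, digits: list[int]) -> str:
    """Serialize a cell as a base-cell prefix plus one char per resolution."""
    return f"B{base_cell:02d}" + "".join(str(d) for d in digits)

def parent(encoded: str) -> str:
    """Dropping the last character yields the containing parent cell."""
    return encoded[:-1]

def contains(coarse: str, fine: str) -> bool:
    """A coarser cell contains a finer one iff its string is a prefix."""
    return fine.startswith(coarse)

# A GPS track stored as a set of fine cells matches a low-resolution
# query cell via a prefix query, exactly as described above.
track = [encode_cell(34, [1, 4, 2, 6]), encode_cell(34, [1, 4, 3, 0])]
query = encode_cell(34, [1, 4])
assert all(contains(query, cell) for cell in track)
assert parent(track[0]) == "B34142"
```

In Elasticsearch this maps onto a `prefix` query against the keyword field holding the encoded hexes.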
I think at the end of this year we’ll have about 6TB of these hex sets in a four-node, eight-process ES cluster. Performance is pretty good. It also acts as our full-text search; half the time we want a geo search we also want keyword matching / filtering / etc. on the metadata of these trips.
Pretty fun system to build, and the concept works with a wide variety of data stores. Felt like a total hack job but it has stood the test of time.
Thanks uber, h3 is a great library!
jillesvangurp15 hours ago
Elasticsearch and OpenSearch have a built-in geo_shape type that is better optimized for queries like this.
Before that existed (pre-1.0, actually), I did something similar with geohashes, which are similar to H3 but based on simple string-encoded quadtrees. I indexed all the street segments in OpenStreetMap with that (~800 million at the time) and implemented a simple reverse geocoder. Worked shockingly well.
The geo_shape type uses a BKD tree in binary format. It's heavily optimized for this type of intersects/overlaps query at scale. It basically does the same thing but uses a lot less disk space and memory. It's similar to what you would find in proper GIS databases. Elasticsearch/OpenSearch also support H3 and geohash grid aggregations on top of geo_shape or geo_point types.
I'm guessing the author is using something like PostgreSQL, which of course has similar geospatial indexing support via PostGIS.
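For reference, the geohash scheme mentioned above is simple enough to sketch: alternately halve the longitude and latitude ranges, record which half the point falls in, and pack every 5 bits into a base-32 character. Truncating the string gives the containing cell, which is what makes prefix-style queries work (an illustrative encoder, not the original code):

```python
# Minimal geohash encoder. A string prefix is the containing coarser cell.
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat: float, lon: float, precision: int = 9) -> str:
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    even = True  # geohash interleaving starts with a longitude bit
    while len(bits) < precision * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1)
                lon_lo = mid
            else:
                bits.append(0)
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1)
                lat_lo = mid
            else:
                bits.append(0)
                lat_hi = mid
        even = not even
    # Pack each group of 5 bits into one base-32 character.
    return "".join(
        _BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, precision * 5, 5)
    )

full = geohash(57.64911, 10.40744, 11)   # 'u4pruydqqvj'
assert full.startswith(geohash(57.64911, 10.40744, 5))  # coarser cell = prefix
```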
cullenking6 hours ago
Doesn’t meet all our product requirements, unfortunately. We use the returned hexes in certain queries, and we also hacked in directionality of the line using the least significant 12 bits of the hex (we didn’t need that level of hex precision), and we are doing direction-oriented matching and counting. For simpler use cases it’s definitely a better option. Thanks for reminding me and other people reading my comment!
anacoluthe13 hours ago
Beware that the parent hexagon does not contain its children...
penteract12 hours ago
No idea if they are doing this, but you can use Gosper islands (https://en.wikipedia.org/wiki/Gosper_curve) which are close to hexagons, but can be exactly decomposed into 7 smaller copies.
nextaccountic9 hours ago
Can Gosper islands tile the sphere though?
chrisweekly11 hours ago
Awesome comment, thanks for sharing the details. I love this kind of pragmatic optimization. Also, one dev's "total hack* job" [e.g. yourself, in the past] is another's stroke of genius.
* I'd frame it as "kludge", reserving "hack" for the positive HN sense. :)
ajfriend15 hours ago
Very cool! And the prefix queries you mention are what I was trying to get at in another comment, but you explained it better :)
freakynit14 hours ago
Does this affect writes negatively?
cullenking6 hours ago
Not any differently than any other indexed text field.
jandrewrogers17 hours ago
There is a lot of literature on join operations using discrete global grid systems (DGGS). H3 is a widely used DGGS optimized for visualization.
If joins are a critical performance-sensitive operation, the most important property of a DGGS is congruency. H3 is not congruent; it was optimized for visualization, where congruency doesn’t matter, rather than for analytical computation. For example, the article talks about deduplication, which is not even necessary with a congruent DGGS. You can do joins with H3, but it is not recommended as a general rule unless the data is small enough that you can afford to brute-force it to some extent.
H3 is great for doing point-geometry aggregates. It shines at that. Not so much geospatial joins, though. DGGS optimized for analytic computation (and joins by implication) exist; they just aren’t optimal for trivial visualization.
0xfaded16 hours ago
S2 has this property https://s2geometry.io
jandrewrogers16 hours ago
Yes, S2 is a congruent DGGS. Unfortunately, it kind of straddles the analytics and visualization property space, being not particularly good at either. It made some design choices, like being projective, that limit its generality as an analytic DGGS. In fairness, its objectives were considerably more limited when it was created. The potential use cases have changed since then.
mdasen16 hours ago
Is there a congruent DGGS that you would recommend?
jandrewrogers15 hours ago
None that are well-documented publicly. There are a multitude of DGGS, often obscure, and they are often designed to satisfy specific applications. Most don’t have a public specification but they are easy to design.
If the objective is to overfit for high-performance scalable analytics, including congruency, the most capable DGGS designs are constructed by embedding a 2-spheroid in a synthetic Euclidean 3-space. The metric for the synthetic 3-space is usually defined to be both binary and a whole multiple of meters. The main objection is that it is not an “equal area” DGGS, so it isn’t good for a pretty graphic, but it is trivially projected into one as needed, so it doesn’t matter that much. The main knobs you might care about are the spatial resolution and how far the 3-space extends, e.g. it is common to include low Earth orbit in the addressable space.
I was working with a few countries on standardizing one such design but we never got it over the line. There is quite a bit of literature on this, but few people read it and most of it is focused on visualization rather than analytic applications.
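As a loose illustration of the embedding idea (everything here is invented for the sketch: a spherical rather than WGS84 spheroidal model, a made-up power-of-two cell size, and simple truncation in place of a real design's quantization rules):

```python
import math

# Sketch: quantize points on the sphere into an integer Euclidean 3-space
# whose unit is metres, so hierarchy operations become bit shifts rather
# than spherical geometry. A power-of-two cell size makes "take the parent"
# a right shift per axis -- the "binary metric" mentioned above.
R = 6_371_000  # mean Earth radius in metres (spherical approximation)

def to_3space_cell(lat_deg: float, lon_deg: float, cell_m: int = 1024):
    """Map lat/lon to an integer (x, y, z) cell id in the 3-space grid."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = R * math.cos(lat) * math.cos(lon)
    y = R * math.cos(lat) * math.sin(lon)
    z = R * math.sin(lat)
    return tuple(math.floor(c) // cell_m for c in (x, y, z))

def parent_cell(cell, levels: int = 1):
    """Coarsen by doubling the cell edge `levels` times (pure bit shifts)."""
    return tuple(c >> levels for c in cell)

a = to_3space_cell(48.8566, 2.3522)   # Paris
b = to_3space_cell(48.8570, 2.3530)   # a point roughly 60 m away
# Nearby points land in the same or an adjacent ~1 km cell on every axis.
assert all(abs(ai - bi) <= 1 for ai, bi in zip(a, b))
assert parent_cell((16, 17, 18), 4) == (1, 1, 1)
```

Note that, as described above, cell size here says nothing about area on the 2-sphere; equal-area views would be produced by projection on demand.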
srean12 hours ago
Pointers to the literature please. I don't work in this space but love geometry.
ajfriend16 hours ago
I agree that the lack of congruency in H3 hexagons can cause weird overlaps and gaps if you plot mixed resolutions naively, but there are some workarounds that work pretty well in practice. For example, if you have mixed resolutions from compacted H3 cells but a single “logical” target resolution underneath, you can plot the coarser cells not with their native geometry, but using the outline of their children. When you do that, there are no gaps. (Totally unrelated but fun: that shape is a fractal sometimes called a "flowsnake" or a "Gosper Island" (https://en.wikipedia.org/wiki/Gosper_curve), which predates H3 by decades.)
That said, this feels like an issue with rendering geometry rather than with the index itself. I’m curious to hear more about why you think the lack of congruency affects H3’s performance for spatial joins. Under the hood, it’s still a parent–child hierarchy very similar to S2’s — H3 children are topological rather than geometric children (even though they still mostly overlap).
jandrewrogers15 hours ago
In a highly optimized system, spatial joins are often bandwidth bound.
Congruency allows for much more efficient join schedules and maximizes selectivity. This minimizes data motion, which is particularly important as data becomes large. Congruent shards also tend to be more computationally efficient generally, which does add up.
The other important aspect not raised here is that congruent DGGS have much more scalable performance when you use them to build online indexes during ingestion. This follows from their being much more concurrency-friendly.
ajfriend13 hours ago
I appreciate the reply! So, I might be wrong here, but I think we may be talking about two different layers. I’m also not very familiar with the literature, so I’d be interested if you could point me to relevant work or explain where my understanding is off.
To me, the big selling point of H3 is that once you’re "in the H3 system", many operations don’t need to worry about geometry at all. Everything is discrete. H3 cells are nodes in a tree with prefixes that can be exploited, and geometry or congruency never really enter the picture at this layer.
Where geometry and congruency do come in is when you translate continuous data (points, polygons, and so on) into H3. In that scenario, I can totally see congruency being a useful property for speed, and that H3 is probably slower than systems that are optimized for that conversion step.
However, in most applications I’ve seen, the continuous-to-H3 conversion happens upstream, or at least isn’t the bottleneck. The primary task is usually operating on already "hexagonified" data, such as joins or other set operations on discrete cell IDs.
Am I understanding the bottleneck correctly?
jandrewrogers3 hours ago
A DGGS is a specialized spatial unit system that, depending on the design, allows you to elide expensive computation for a select set of operations with the tradeoff that other operations become much more expensive.
H3 is optimized for equal-area point aggregates. Congruency does not matter for these aggregates because there is only a single resolution. To your point, in H3 these are implemented as simple scalar counting aggregates -- little computational geometry required. Optimized implementations can generate these aggregates more or less at the speed of memory bandwidth. Ideal for building heat maps!
H3 works reasonably well for sharding spatial joins if all of the cells have the same resolution and are therefore disjoint. The number of records per cell can be highly variable, so this is still suboptimal; adjusting the cell size to get better distribution just moves the suboptimality around. There is also added complexity if polygon data is involved.
The singular importance of congruence as a property is that it enables efficient and uniform sharding of spatial data for distributed indexes, regardless of data distribution or geometry size. The practical benefits follow from efficient and scalable computation over data stored in cells of different size, especially for non-point geometry.
Some DGGS optimized for equal-area point aggregates are congruent, such as HEALPix[0]. However, that congruency comes at high computational cost and unreasonably difficult technical implementation. Not recommended for geospatial use cases.
Congruence has an important challenge that most overlook: geometric relationships on a 2-spheroid can only be approximated on a discrete computer. If you are not careful, quantization to the discrete during computation can effectively create tiny gaps between cells or tiny slivers of overlap. I've seen bugs in the wild from when the rare point lands in one of these non-congruent slivers. Mitigating this can be costly.
This is how we end up with DGGS that embed the 2-spheroid in a synthetic Euclidean 3-space. Quantization issues on the 2-spheroid become trivial in 3-space. People tend to hate two things about these DGGS designs though, neither of which is a technical critique. First, these are not equal area designs like H3; cell size does not indicate anything about the area on the 2-sphere. Since they are efficiently congruent, the resolution can be locally scaled as needed so there are no technical ramifications. It just isn't intuitive like tiling a map or globe. Second, if you do project the cell boundaries onto the 2-sphere and then project that geometry into something like Web Mercator for visualization, it looks like some kind of insane psychedelic hallucination. These cells are designed for analytic processing, not visualization; the data itself is usually WGS84 and can be displayed in exactly the same way you would if you were using PostGIS, the DGGS just doesn't act as a trivial built-in visualization framework.
Taking data stored in a 3-space embedding and converting it to H3-ified data or aggregates on demand is simple, efficient, and highly scalable. I often do things this way even when the data will only ever be visualized in H3 because it scales better.
TacticalCoder14 hours ago
> If joins are a critical performance-sensitive operation, the most important property of a DGGS is congruency.
Not familiar with geo stuff / DGGS. Is H3 not congruent because hexagons, unlike squares or triangles, do not tile the plane perfectly?
I mean: could a system using hexagons ever be congruent?
urschrei10 hours ago
Hexagons do tile the Euclidean plane perfectly. Of the three regular polygons that tile the plane (triangles, squares, hexagons), the hexagon is the one with the most sides.
rockinghighan hour ago
That's not true when tiling the Earth, though. You need 12 pentagons to close the shape at every resolution; you can't tile the sphere with hexagons alone. That's also why footballs stitch together pentagons and hexagons.
dgsan16 hours ago
I don't like what scrolling this site does to my browser history.
stevemk14ebr3 hours ago
Truly trash blog design
nmstoker16 hours ago
Yes, noticed that too. Blocked me getting back to HN. Bad behaviour from the site.
markstos6 hours ago
There are some competing grid systems with similar features and benefits as well.
Notably A5, which has the property that each cell covers exactly the same area, even when stretched towards the north and south poles. Useful for certain spatial analyses where you need every cell to have the same size.
jandrewrogers3 hours ago
That is a nicely designed DGGS, a lot of attention paid to the details. I hadn't seen it before.
markstos3 hours ago
The author of A5 was recently featured on the Mapscaping podcast: https://mapscaping.com/podcast/a5-pentagons-are-the-new-best...
mattforrest2 hours ago
I wrote a post about using H3 (or any DGGS, for that matter). Yes, it speeds things up, but you lose accuracy. If search is the primary concern it can help, but if any level of accuracy matters I would just use a better engine with GeoParquet to handle it. https://sedona.apache.org/latest/blog/2025/09/05/should-you-...
geophile4 hours ago
Z-order based indexes avoid the resolution problem. Basically:
- Generate z-values for spatial objects. Points -> a single z-value at the highest resolution of the space. Non-points -> multiple z-values. Each z-value is represented by a single integer (I use 64-bit z-values, which provide for a space resolution of 56 bits). Each integer represents a 1-d range, e.g. 0x123 would represent 0x123000 through 0x123fff.
- Spatial join is basically a merge of these z-values. If you are joining one spatial object with a collection of N spatial objects, the time is log N. If you are joining two collections, then it's more of a linear-time merge.
For more information: PROBE Spatial Data Modeling and Query Processing in an Image Database Application. IEEE Trans. Software Eng. 14(5): 611-629 (1988)
An open source java implementation: https://github.com/geophile/geophile. (The documentation includes a number of corrections to the published algorithm.)
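The z-value scheme can be illustrated with a toy sketch (not the geophile implementation): interleave the bits of quantized coordinates, and treat a coarse z-value as a contiguous 1-d range of finer ones.

```python
def interleave(x: int, y: int, bits: int = 28) -> int:
    """Morton/z-order value: alternate the bits of two quantized coords."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def z_range(z_prefix: int, prefix_bits: int, total_bits: int = 56):
    """A z-value at a coarser resolution denotes a contiguous 1-d range."""
    shift = total_bits - prefix_bits
    lo = z_prefix << shift
    return lo, lo | ((1 << shift) - 1)

# The comment's own example: 0x123 at 12 of 24 bits covers 0x123000-0x123fff,
# so a spatial join reduces to merging sorted lists of such ranges.
assert z_range(0x123, 12, 24) == (0x123000, 0x123FFF)

p = interleave(5, 9)            # a point's z-value at full resolution
lo, hi = z_range(p >> 8, 48)    # the same point's coarser enclosing cell
assert lo <= p <= hi            # the coarse range contains the point
```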
febed16 hours ago
Wouldn’t having a spatial index give you most of the performance gains talked about here without needing H3?
feverzsj14 hours ago
Yes. And it should be faster. They may have forgotten to create a spatial index.
twelvechairs13 hours ago
Agree with this. They are re-solving a problem that has been solved better by others before (with R-trees).
They may well be using some data storage where spatial indexing is not possible or standard. GeoParquet is a common one now: a great format in many ways, but spatial indexing isn't there.
Postgres may be out of fashion, but an old-fashioned PostGIS server is still the simplest solution sometimes.
rockinghigh41 minutes ago
I wouldn't say R-trees solve the problem better. Joining multiple spatial datasets indexed with R-trees is more complex, as the nodes are dynamic and data-dependent. Neighborhood search is also more complicated because parent nodes overlap.
pb0607 hours ago
Why do you consider Postgres + PostGIS out of fashion? What are people using for spatial data these days?
twelvechairsan hour ago
For use cases like this - long term geospatial people still use postgis as foundational - mainly for its speed at scale and spatial indexing.
For the wider tech world, I would say Postgres suffers from being "old tech" and somewhat "monolithic". There have been a lot of trends against it (e.g. NoSQL, fleeing the monolith, data lakes). More practically, for a lot of businesses geospatial is not their primary focus; they bring other tech stacks, so something like PostGIS can seem like duplication if they already use another database, data storage format, or data processing pipeline. Also, the proliferation of other software and file formats has made some use cases easier without PostGIS.
Really, I'd say the most common path I've seen for people who don't have an explicit geospatial background is to avoid PostGIS until it becomes absolutely clear that they need it.
analytically12 hours ago
At 500 stations:
- H3: 218µs, 4.7KB, 109 allocs
- Fallback: 166µs, 1KB, 37 allocs
- Fallback is 31% faster
At 1000 stations:
- H3: 352µs, 4.7KB, 109 allocs
- Fallback: 312µs, 1KB, 37 allocs
- Fallback is 13% faster
At 2000 stations:
- H3: 664µs, 4.7KB, 109 allocs
- Fallback: 613µs, 1KB, 37 allocs
- Fallback is 8% faster
At 4500 stations (real-world scale):
- H3: 1.40ms, 4.7KB, 109 allocs
- Fallback: 1.34ms, 1KB, 37 allocs
- Fallback is 4% faster
Conclusion: The gap narrows as station count increases. At 4500 stations they're nearly equivalent. H3 has fixed overhead (~4.7KB / 109 allocs for the k=2 ring), while the fallback scales linearly. The crossover point where H3 wins is likely around 10-20K entries.
hiddew9 hours ago
Also see PostGIS for Postgres systems: https://postgis.net/docs/.
mcherm7 hours ago
Why does H3 use a nonoverlapping set of hexagons? A square grid would make it even simpler and faster to calculate. I am perfectly happy to believe that a hex grid works better for some reason but what is that reason?
kbaker4 hours ago
Maybe this comparison with S2 will explain:
RaczeQ3 hours ago
The main design goal was to make the distance between neighbours constant. With squares, you have 4 side neighbours and 4 corner neighbours at a different distance. With hexagons, it's easier to interpolate paths and analyse distances.
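The constant-neighbour-distance point is easy to check numerically (a quick demonstration on idealized unit grids, unrelated to the H3 API):

```python
import math

# Square grid: the 8 surrounding cells sit at two different
# centre-to-centre distances (1 for sides, sqrt(2) for corners).
square_dists = {
    round(math.hypot(dx, dy), 9)
    for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)
}

# Hex grid in axial coordinates: all 6 neighbours are equidistant.
def hex_center(q: int, r: int):
    """Axial hex coordinates -> Cartesian centre with unit spacing."""
    return (q + r / 2, r * math.sqrt(3) / 2)

hex_dists = {
    round(math.hypot(*hex_center(q, r)), 9)
    for q, r in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
}

assert len(square_dists) == 2   # two distinct distances on a square grid
assert hex_dists == {1.0}       # one distance for every hex neighbour
```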
galkk11 hours ago
Ohh, every geo join / spatial thing with a picture that consists of those small cells over a map is such a pet peeve of mine. Facebook Marketplace, Craigslist, Tinder, any app with “proximity search”.
No, this city isn’t 4 miles from my city. There is a literal lake between us. It’s 10+ miles.
Please, invent something, do precompute, but just avoid naive-ish searches.
boxed10 hours ago
Is this related to the article?
cyanydeez9 hours ago
He's just angry he's not a crow.
mgaunard12 hours ago
Why doesn't it use k-d trees or r-trees?
cpa11 hours ago
The big reason is that H3 is data-independent. You put your data in predefined bins and then join on them, whereas k-d/R-trees depend on the data, and building the trees may become prohibitive or very hard (especially in distributed systems).
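The data-independent binning point can be made concrete with a toy sketch (a plain square grid stands in for H3 cells here; the bin function is the only thing both sides need to agree on):

```python
from collections import defaultdict

# Each side maps records to a fixed, data-independent bin id in O(1);
# the join is then an ordinary hash join on the bin id. No shared tree
# has to be built or rebalanced, which is what makes this easy to
# distribute and to apply to streaming data.

def bin_id(lat: float, lon: float, res: float = 1.0):
    """Data-independent grid cell id (square grid standing in for H3)."""
    return (int(lat // res), int(lon // res))

def grid_join(left, right, res: float = 1.0):
    """Join two (name, lat, lon) datasets on their shared bin ids."""
    buckets = defaultdict(list)
    for name, lat, lon in right:
        buckets[bin_id(lat, lon, res)].append(name)
    return [
        (name, other)
        for name, lat, lon in left
        for other in buckets.get(bin_id(lat, lon, res), [])
    ]

stations = [("a", 10.2, 20.1), ("b", 10.7, 20.3)]
cities = [("x", 10.4, 20.9), ("y", 55.0, 5.0)]
assert grid_join(stations, cities) == [("a", "x"), ("b", "x")]
```

A real system would also probe neighbouring cells for radius-style matching; H3's fixed cell hierarchy plays the role of `bin_id` here.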
mgaunard11 hours ago
Indices are meant to depend on the data, yes; not exactly rocket science.
Updating an R-tree is log(n) just like any other index.
rockinghigh38 minutes ago
How do you join two datasets using r-trees? In a business setting, having a static and constant projection is critical. As long as you agree on zoom level, joining two datasets with S2 and H3 is really easy.
vouwfietsman9 hours ago
I think the key is in the distributed nature: H3 is effectively a grid, so it can easily be distributed over nodes. A recursive system is much harder to handle that way. R-trees are great if you are OK with indexing all the data on one node, which I think is a no-go for a global system.
This is all speculation, but intuitively your criticism makes sense.
Also, mapping 147k cities to countries should not take 16 workers and 1TB of memory, I think the example in the article is not a realistic workload.
cpa9 hours ago
To add to the sibling comment: if you have streaming data, you have to update the shared index on every write with R/k-d trees, whereas with H3 you just compute the bin: O(1) instead of O(log n).
Not rocket science but different tradeoffs, that’s what engineering is all about.
avereveard13 hours ago
Nice writeup; terrible website garbling the page history.
I wonder how it compares with geohashing. I know geohashing is not as efficient in terms of partitioning, and queries end up weird since you need to manage decoding neighbor cells, but finding all elements of a cell is a "starts with" query, which allows you to put the data effectively on most NoSQL databases with some sort of text sorting.