Some of the most useful internal systems do not automate workflow directly. They improve judgment.
This one did that by turning provider coverage into something teams could see, inspect, and compare geographically rather than infer from fragmented tables.
The system was an internal network intelligence map for analyzing provider coverage across India at the pincode level.
The question behind it was straightforward:
Where is coverage strong, where is it weak, and where should network expansion be investigated more intelligently?
The problem
Provider network analysis becomes abstract very quickly.
It is easy enough to count providers by city, state, or category. That tells you something, but not enough. The more useful questions are spatial:
- Which pincodes are well covered?
- Which ones show obvious gaps?
- Which internal providers exist inside a selected pincode?
- What external medical availability exists nearby?
- Where does the network look thinner than it should?
That is where spreadsheets stop being enough.
Once proximity, coverage, and comparison start to matter, the network has to be understood geographically.
What we built
We built an internal GIS-style decision-support tool that brings those questions onto a map.
At its core, the system allows teams to:
- visualize pincode coverage as polygons
- inspect internal providers within a selected pincode
- compare internal network presence against nearby external medical places
- filter across business and geographic dimensions
- investigate potential expansion opportunities more directly
The product supports both MapLibre and Google Maps, and combines pincode coverage, internal providers, and external comparison data into a single exploration surface.
The point was not just to display more information. It was to make the network easier to reason about.
What mattered in the build
A few choices made the system materially better to use.
At low zoom, simplified pincode geometry kept the map responsive without losing the overall coverage picture.
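The idea behind that simplification can be sketched with a classic vertex-reduction pass. This is an illustrative sketch, not the production pipeline: the Ramer-Douglas-Peucker routine below and the per-zoom tolerances are assumptions standing in for whatever the real system uses.

```python
def _point_line_dist(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Cross-product magnitude divided by chord length gives the distance.
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tolerance):
    """Ramer-Douglas-Peucker: drop vertices within `tolerance` of the chord."""
    if len(points) < 3:
        return points
    # Find the vertex farthest from the chord between the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Otherwise keep that vertex and recurse on both halves.
    left = simplify(points[: index + 1], tolerance)
    right = simplify(points[index:], tolerance)
    return left[:-1] + right

def tolerance_for_zoom(zoom):
    """Coarser geometry at low zoom; these thresholds are invented."""
    return 0.01 if zoom < 8 else 0.001 if zoom < 12 else 0.0
```

At country-wide zoom, a tolerance like this can remove most vertices of a pincode boundary while leaving the coverage picture visually intact.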
Tile-based loading made exploration practical at scale instead of forcing the interface to carry too much at once.
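The tile math underneath that kind of loading is standard slippy-map (XYZ) arithmetic. A minimal sketch, assuming Web Mercator tiling; the function names are illustrative, not the real service's API:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Map a lon/lat to the XYZ tile containing it at `zoom`."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiles_for_viewport(west, south, east, north, zoom):
    """Enumerate only the tiles the current viewport actually needs."""
    x0, y0 = lonlat_to_tile(west, north, zoom)   # top-left tile
    x1, y1 = lonlat_to_tile(east, south, zoom)   # bottom-right tile
    return [(x, y) for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]
```

The interface then fetches only those tiles (and can cache them by `(x, y, zoom)`), so panning across the country never pulls the whole dataset at once.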
On hot backend paths, faster numeric bounding-box query logic replaced heavier spatial matching in the critical path. One optimization alone cut the time to answer an empty tile from roughly 190 ms to around 33 ms.
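The shape of that fast path can be sketched roughly as follows. The real query runs in the database; the names and data shapes here are assumptions. The point is that four float comparisons per polygon reject most candidates before any expensive geometry test runs, and a tile whose box touches nothing is answered immediately:

```python
def bbox(polygon):
    """Precompute a polygon's axis-aligned bounding box once at load time."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_intersect(a, b):
    """Four float comparisons -- far cheaper than any polygon test."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def candidates_for_tile(tile_box, indexed):
    """Return only polygons whose boxes touch the tile. If none do, the
    tile is empty and the heavier spatial matching is skipped entirely."""
    return [poly for box, poly in indexed if boxes_intersect(box, tile_box)]
```

Only the surviving candidates, usually a handful, go on to exact spatial matching.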
Filtering also needed more discipline than it first appeared. Cleaning up how filters were represented and matched made query behavior more predictable and reduced avoidable drag in the interaction loop.
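One way to impose that discipline is to canonicalize filters before they reach the query layer. This is a hypothetical sketch; the field names and the exact normalization rules in the real system are assumptions:

```python
def normalize_filters(raw):
    """Lower-case keys, drop empty values, and sort multi-values so that
    logically identical filters compare (and cache) identically."""
    out = {}
    for key, values in raw.items():
        if isinstance(values, str):
            values = [values]
        cleaned = sorted({v.strip().lower() for v in values if v and v.strip()})
        if cleaned:
            out[key.strip().lower()] = cleaned
    return out

def cache_key(filters):
    """A stable key: equivalent filter sets always hit the same cache entry."""
    return tuple(sorted((k, tuple(v)) for k, v in filters.items()))
```

With a canonical form, two requests that differ only in casing, ordering, or stray whitespace behave identically, which is what makes query behavior predictable.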
And where result caps were necessary, the system surfaced that clearly instead of pretending partial views were complete. That made the product easier to interpret and more trustworthy.
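Surfacing a cap honestly costs almost nothing at the API level. A minimal sketch, with an invented cap value and field names:

```python
MAX_RESULTS = 500  # assumed cap, not the production number

def capped_response(matches, limit=MAX_RESULTS):
    """Return at most `limit` rows, and say so explicitly when truncated."""
    truncated = len(matches) > limit
    return {
        "results": matches[:limit],
        "truncated": truncated,         # the UI can render this as a banner
        "total_matched": len(matches),  # so users know how much was cut
    }
```

The UI then has everything it needs to say "showing 500 of 1,240" instead of silently presenting a partial view as complete.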
What I took from it
A few things stood out while building it.
Internal tools deserve real product thinking. If they influence actual decisions, the quality bar should not be lower because the software is not customer-facing.
Maps only become useful when the surrounding UX is disciplined. The filters, inspection flow, loading behavior, and detail views matter as much as the map itself.
Performance is also part of trust. In systems built for exploration, responsiveness affects whether people believe what they are seeing.
And comparison is where the product becomes more useful. Showing internal coverage alone is helpful. Showing it against nearby external availability is much more informative.
That changes the question from “what do we have here?” to “how strong is the network here, really?”
That is a better system to build.
Closing thought
I’m most interested in software that sits close to execution.
Not because it is glamorous, but because it often has disproportionate leverage. These are the systems that shape how teams understand a problem, where they look next, and how effectively they act.
This project fit that pattern. It turned provider and geography data into something teams could work with more intelligently.