You Can't Protect What You Can't See: Building an Identity-Centric Visibility Platform
A roadmap for identity visibility: instrument telemetry, automate discovery, and map service boundaries for better access governance.
Mastercard’s Gerber captures a hard truth that every CISO eventually learns: security is a visibility problem before it is a control problem. If you cannot reliably see your identities, devices, services, and trust boundaries, then every downstream decision—MFA policy, access governance, incident response, compliance reporting—rests on partial data. That is why the modern answer is not another isolated tool, but an identity-centric visibility platform that continuously instruments the full access path and maps what is actually in scope. For teams building this foundation, the challenge is less about acquiring more data and more about turning telemetry into defensible decisions, a theme that also shows up in our guide on evaluating data analytics vendors for geospatial projects, where the quality of boundaries matters as much as the data itself.
In practice, identity visibility means more than knowing who logged in. It means understanding the entire interaction layer: which human or workload identity requested access, from which device, over which network path, to which service, through which boundary, and under which policy. Organizations that treat this as a dashboard exercise typically end up with pretty charts and blind spots. Organizations that treat it as an operational architecture can discover unmanaged assets faster, shrink their attack surface, and make access governance measurable. If you’re also wrestling with adjacent systems and policy sprawl, our article on internal chargeback systems for collaboration tools offers a useful lesson: governance works when usage, ownership, and cost are visible enough to act on.
Why Visibility Has Become the New Security Control Plane
The old perimeter is gone; the real boundary is now identity
In cloud and hybrid environments, the perimeter is no longer a network edge—it is the policy boundary around identity. Users authenticate from unmanaged laptops, contractors connect from home networks, applications call APIs through service accounts, and devices continuously rotate between trusted and untrusted states. That means access decisions are being made in a context-rich environment where “who” alone is insufficient. If your platform cannot connect identity to device posture, network lineage, and service exposure, you are making authorization decisions with incomplete evidence.
This shift is especially acute for CISOs because the visibility problem compounds in every direction. Attackers do not need to defeat all controls; they only need to exploit one unmanaged asset, one stale entitlement, one over-permissioned service account, or one boundary that was assumed but never actually mapped. A mature visibility layer reduces that uncertainty by continuously discovering what exists and what changed. For teams building modern identity programs, this complements the practical principles covered in balancing innovation with security skepticism, where the core theme is the same: adopt new capabilities only when you can verify their risks.
Identity visibility is operational, not cosmetic
Many organizations mistake inventory for visibility. A quarterly export from an IAM system, CMDB, or EDR tool may tell you what was known at a point in time, but it will not tell you whether that asset is still present, whether the account is still active, or whether the service boundary has shifted. Real visibility requires telemetry streaming from identity providers, endpoint agents, network sensors, cloud control planes, and application logs. The platform then correlates those signals into a live model of trust. That model is what makes access governance enforceable rather than theoretical.
To understand why that matters, compare it to how teams evaluate volatile markets or regulated systems: they look for signals, not snapshots. The same mindset appears in industry analyst coverage of banking, industrial activity, and consumer spending, where trend visibility matters more than a single datapoint. Security teams need that same continuous viewpoint. When you can see identity activity as a stream, you can detect anomalies earlier, validate policy drift faster, and prioritize remediation based on what is truly exposed.
Mastercard’s thesis translated into architecture
Gerber’s warning is often summarized as a strategic statement, but it is actually an architectural requirement. If you cannot define where your environment begins and ends, you cannot confidently assign controls, scope audits, or respond to incidents. That means an identity-centric visibility platform must solve three problems at once: discover assets automatically, map service boundaries accurately, and enrich every access decision with context. In other words, visibility is not a reporting feature; it is the substrate for control.
This is where identity infrastructure teams must think like platform engineers. Instead of asking, “What dashboards do we need?” ask, “What telemetry do we need to trust an access decision?” That mindset is similar to the practical approach in predictive maintenance for fleets: the goal is not merely to collect sensor data, but to act on it before failure occurs. Security is the same. If telemetry arrives too late, is too sparse, or cannot be correlated, it may look informative while still being operationally useless.
What an Identity-Centric Visibility Platform Must See
Identity telemetry: human, machine, and service identities
The first telemetry layer is identity itself. That includes workforce identities, privileged admin accounts, contractors, customers, service accounts, API clients, and workload identities in Kubernetes or serverless platforms. Each has different lifecycle rules, entitlements, and risk profiles, but they should all be visible through a common schema. Without that, you cannot answer basic questions such as which accounts are dormant, which are overprivileged, and which are tied to critical systems.
Good identity telemetry should include authentication events, token issuance, failed logins, factor enrollment, privilege escalation, conditional access outcomes, and session duration. It should also capture identity attributes that affect risk: role, department, location, device trust level, and whether the account is managed by a source-of-truth system. If your program also evaluates external trust relationships or vendor risk, the due-diligence mindset from a lightweight due diligence scorecard is useful here: establish a repeatable rubric rather than relying on intuition.
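To make that concrete, here is a minimal sketch of what a normalized identity event might look like in Python. The field names and source labels are illustrative assumptions, not a standard schema; the point is that every source should map into one common shape so dormancy, privilege, and ownership questions become queryable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class IdentityEvent:
    """A normalized identity telemetry record (illustrative schema)."""
    event_time: datetime
    actor_id: str              # stable identifier, not a display name
    actor_type: str            # "human", "service_account", "workload"
    event_type: str            # "auth_success", "auth_failure", "token_issued", ...
    source_system: str         # whichever IdP or platform emitted the event
    device_id: Optional[str] = None
    mfa_satisfied: bool = False
    conditional_access_outcome: Optional[str] = None
    session_id: Optional[str] = None
    attributes: dict = field(default_factory=dict)  # role, department, location, ...

# Example: a workforce login enriched with risk-relevant attributes.
event = IdentityEvent(
    event_time=datetime.now(timezone.utc),
    actor_id="u-19422",
    actor_type="human",
    event_type="auth_success",
    source_system="okta",          # hypothetical source name
    device_id="dev-7731",
    mfa_satisfied=True,
    conditional_access_outcome="allow",
    attributes={"role": "dba", "department": "payments", "managed_by_hr": True},
)
```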
Device telemetry: posture, ownership, and drift
Identity without device context creates false confidence. A legitimate user on a compromised or unmanaged endpoint can be just as dangerous as an attacker with stolen credentials. Device telemetry should therefore include operating system version, patch level, EDR status, disk encryption, jailbreak/root status, certificate presence, and enrollment state in MDM/UEM. Where possible, the platform should also track hardware identity, device ownership, and historical trust classification.
That data becomes powerful when it is used to evaluate risk in motion. A device that passed compliance this morning may be noncompliant by afternoon if its EDR agent stops reporting or a critical patch is missed. Continuous device telemetry can then trigger a step-up authentication challenge, limit access to sensitive apps, or quarantine the session altogether. For a useful analogy in product evaluation, see what to check before a broad PC upgrade: timing, compatibility, and state matter more than a one-time purchase decision.
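A small sketch of what that continuous evaluation can look like, with an assumed silence threshold and a hypothetical posture record; real posture data would come from your MDM and EDR APIs:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical posture record; in practice this comes from MDM/EDR feeds.
posture = {
    "device_id": "dev-7731",
    "edr_last_checkin": datetime.now(timezone.utc) - timedelta(hours=9),
    "disk_encrypted": True,
    "critical_patch_missing": False,
    "mdm_enrolled": True,
}

EDR_SILENCE_LIMIT = timedelta(hours=4)  # assumed threshold, tune per environment

def evaluate_posture(p: dict, now: datetime) -> str:
    """Return an access action based on current device posture."""
    if not p["mdm_enrolled"] or not p["disk_encrypted"]:
        return "quarantine_session"
    if now - p["edr_last_checkin"] > EDR_SILENCE_LIMIT:
        return "step_up_authentication"   # agent went quiet: challenge the user
    if p["critical_patch_missing"]:
        return "restrict_sensitive_apps"
    return "allow"

print(evaluate_posture(posture, datetime.now(timezone.utc)))
# -> "step_up_authentication" because the EDR agent has been silent for 9 hours
```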
Network and service telemetry: where identity actually lands
The third telemetry layer is network and service context. This includes source IP, ASN, geolocation, VPN or proxy use, DNS resolution patterns, east-west traffic, API endpoint metadata, and service mesh traces. In a microservices environment, the important boundary is often not the subnet but the service-to-service relationship. You need to know which identity called which service, through which route, with what latency, and under what trust assumptions. That is the only way to model where access should begin and end.
This is also where observability and security finally converge. Logs alone are insufficient if they lack topology; topology alone is insufficient if it lacks identity. The visibility platform should enrich requests with both, creating a graph that shows relationships instead of isolated events. If you want a helpful analogy for how systems are shaped by external constraints and route choices, look at alternate airport planning under disruption: good operators always know the fallback paths, not just the preferred one.
Automated Discovery: Finding What You Forgot You Had
Asset discovery must span cloud, endpoint, SaaS, and code
Most visibility failures begin with incomplete discovery. Shadow IT, forgotten SaaS tenants, old cloud accounts, unmanaged endpoints, orphaned service accounts, and undocumented APIs all expand the attack surface. Automated discovery must therefore operate across multiple layers: cloud inventory, identity directories, endpoint enrollment, application registries, DNS records, container orchestration, and code repositories. The platform should continuously reconcile those sources rather than assuming any one of them is authoritative.
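The reconciliation itself can start simple. The sketch below assumes three hypothetical inventories and uses basic set logic to surface what each source is missing; assets present at runtime but absent from the system of record are discovery findings, not errors to silently drop.

```python
# Hypothetical inventories pulled from different control planes.
cloud_inventory = {"svc-payments", "svc-reports", "svc-legacy-batch"}
cmdb_records   = {"svc-payments", "svc-reports", "svc-retired"}
idp_clients    = {"svc-payments", "svc-reports", "svc-legacy-batch", "svc-unknown-api"}

# Present in a runtime source but not in the system of record: needs an owner.
unregistered = (cloud_inventory | idp_clients) - cmdb_records
# Present in the system of record but nowhere at runtime: likely stale.
ghost_records = cmdb_records - (cloud_inventory | idp_clients)

print("Needs owner + registration:", sorted(unregistered))
print("Stale CMDB entries:", sorted(ghost_records))
```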
Discovery also needs to understand business structure, not just technical presence. A service that still exists in production but is no longer used by a critical business process is a different risk than a service that handles payments or employee identity flows. That is why mapping ownership matters as much as finding an asset. The lesson is similar to directory management for multi-location businesses: the system is only useful when every location, owner, and operational dependency is visible.
How to discover identity sprawl without breaking production
Automated discovery should begin in observation mode. Ingest identity logs, cloud API events, and endpoint inventory, then infer candidates for dormant, duplicate, or orphaned assets. Correlate findings against HR systems, CMDB data, and IAM directories, and score confidence based on evidence quality. Once the model is stable, add controls such as access reviews, just-in-time approval workflows, and stale-account disabling with human override. The point is to reduce blind spots without causing an outage through overzealous enforcement.
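Here is one way to sketch that observation-mode scoring in Python; the weights and thresholds are illustrative assumptions to be tuned against your own evidence quality, and nothing is disabled automatically at this stage:

```python
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)

# Hypothetical correlated account records (evidence already joined from
# IdP logs, HR data, and directory exports).
accounts = [
    {"id": "svc-backup", "last_auth": NOW - timedelta(days=210),
     "in_hr_system": False, "owner_known": False},
    {"id": "u-19422",   "last_auth": NOW - timedelta(days=3),
     "in_hr_system": True,  "owner_known": True},
]

def dormancy_confidence(acct: dict) -> float:
    """Score 0..1 that an account is safe to flag; weights are illustrative."""
    score = 0.0
    if NOW - acct["last_auth"] > timedelta(days=90):
        score += 0.5               # no recent authentication evidence
    if not acct["in_hr_system"]:
        score += 0.3               # no source-of-truth backing
    if not acct["owner_known"]:
        score += 0.2               # nobody to ask before disabling
    return score

# Observation mode: report candidates, do not disable anything yet.
for acct in accounts:
    c = dormancy_confidence(acct)
    if c >= 0.7:
        print(f"{acct['id']}: dormant candidate (confidence {c:.1f}) -> queue for review")
```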
In mature programs, discovery is not a one-time project but a continuous control. Every new service, device, tenant, or account should be discovered automatically, classified, and assigned to an owner. If you need a practical analogy for building resilient operating loops, automating workflows after an I/O bottleneck shows why instrumentation must come before automation. You cannot automate what you cannot reliably detect.
Prioritize the assets that matter most
Discovery should not treat every asset equally. A CISO needs a model that prioritizes identities and services based on privilege, data sensitivity, business criticality, and exposure. For example, a dormant admin account connected to a production database is more urgent than an unused test account in a sandbox. Likewise, a public API with weak boundary enforcement is more urgent than an internal-only service with strong service-to-service authentication. The platform should rank these findings automatically so teams can focus on meaningful risk.
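A simple illustration of that ranking logic; the factors and weights below are assumptions, but they show how a dormant production admin account and a weakly bounded public API both outrank sandbox noise:

```python
# Illustrative risk ranking of discovery findings; tune weights per environment.
findings = [
    {"id": "admin-dormant",  "privilege": 3, "data_sensitivity": 3,
     "exposure": 1, "business_critical": True},   # dormant admin on prod DB
    {"id": "sandbox-test",   "privilege": 1, "data_sensitivity": 1,
     "exposure": 1, "business_critical": False},  # unused sandbox account
    {"id": "public-api-gap", "privilege": 2, "data_sensitivity": 2,
     "exposure": 3, "business_critical": True},   # weak boundary, internet-facing
]

def risk_score(f: dict) -> int:
    base = f["privilege"] * f["data_sensitivity"] * f["exposure"]
    return base * 2 if f["business_critical"] else base

for f in sorted(findings, key=risk_score, reverse=True):
    print(f["id"], "->", risk_score(f))
# public-api-gap and admin-dormant outrank the sandbox account
```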
That prioritization logic is similar to how strategists assess market signals: not every signal deserves the same response. In a world of limited security bandwidth, choosing where to act is as important as knowing what exists. That’s also why the comparative method in how to compare access models and vendor maturity can be instructive for security tooling—capability without operating maturity is rarely enough.
Mapping Service Boundaries for Access Decisions
Boundaries are logical, not just network-based
One of the most important insights in identity-centric visibility is that service boundaries are rarely defined by VLANs or IP ranges anymore. They are defined by trust relationships, API contracts, data sensitivity, and authorization semantics. A single cloud application may span multiple regions, accounts, clusters, and SaaS integrations, each with different control assumptions. If the platform cannot map those boundaries, access decisions will be either too broad or too brittle.
A boundary map should capture who can call what, from where, under which conditions, and with what fallback behavior. It should identify the boundary owner, the data types crossing the boundary, the authentication mechanism in use, and whether the service is internet-facing, partner-facing, or internal-only. This is where mapping becomes governance. If you’re familiar with how operators model physical and business routes in supply-chain journeys, the principle is the same: the route matters, not just the endpoint.
Build an access boundary graph, not a spreadsheet
A spreadsheet can capture a list of systems; it cannot model the dynamic interdependence between identities, devices, services, and data flows. A boundary graph can. In a graph model, every identity is a node, every service is a node, every access path is an edge, and every edge is enriched with context such as sensitivity, trust level, and policy outcome. That graph allows you to ask better questions: Which privileged accounts can reach customer data? Which service accounts are crossing trust zones? Which devices are touching regulated workloads?
That graph also makes blast-radius analysis much more realistic. If a token is compromised, you can trace all reachable services and enforce containment based on actual relationships rather than broad assumptions. This is analogous to how analysts study cascading failures across domains, as seen in comparisons of ancient catastrophes and modern drivers: the critical question is how one event propagates through interconnected systems. Security incidents propagate the same way.
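A minimal sketch of both ideas using the networkx graph library; the node naming convention and edge attributes are illustrative assumptions, but the two queries map directly to the questions above:

```python
import networkx as nx

# Build a small access-boundary graph: identities and services are nodes,
# access paths are directed edges enriched with context.
g = nx.DiGraph()
g.add_edge("admin:alice", "svc-payments", trust_zone_crossing=False)
g.add_edge("svc-payments", "db-customer", trust_zone_crossing=True,
           data_sensitivity="high")
g.add_edge("svc-reports", "db-customer", trust_zone_crossing=False,
           data_sensitivity="high")
g.add_edge("token:ci-runner", "svc-reports", trust_zone_crossing=True)

# Question 1: which nodes can reach customer data at all?
reachers = [n for n in g.nodes
            if n != "db-customer" and nx.has_path(g, n, "db-customer")]
print("Can reach db-customer:", sorted(reachers))

# Question 2: blast radius of a compromised CI token, traced over real edges.
print("Reachable from token:ci-runner:", sorted(nx.descendants(g, "token:ci-runner")))
```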
Use boundaries to drive access policy, not just documentation
Boundary maps become valuable only when they inform decisions. For example, if a service boundary is labeled high sensitivity and internet-exposed, you might require phishing-resistant MFA, device compliance, session re-authentication, and stricter token lifetime controls. If a boundary is internal, low sensitivity, and workload-only, you might permit certificate-based auth with narrow scopes and short-lived credentials. The point is to tie policy to actual service boundaries rather than applying blanket controls that frustrate users without reducing risk.
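As a sketch, that mapping from boundary attributes to required controls can be expressed directly. The labels and control names below are assumptions; the pattern of deriving policy from the boundary map, rather than applying blanket rules, is the point:

```python
# Sketch: derive required controls from boundary attributes.
def required_controls(boundary: dict) -> list[str]:
    controls = []
    if boundary["sensitivity"] == "high" and boundary["internet_facing"]:
        controls += ["phishing_resistant_mfa", "device_compliance",
                     "session_reauth", "short_token_lifetime"]
    elif boundary["workload_only"] and boundary["sensitivity"] == "low":
        controls += ["mtls_cert_auth", "narrow_scopes", "short_lived_credentials"]
    else:
        controls += ["mfa", "device_compliance"]
    return controls

print(required_controls({"sensitivity": "high", "internet_facing": True,
                         "workload_only": False}))
# -> phishing-resistant MFA plus session and token restrictions
```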
Teams building policy-driven systems often discover that the hard part is not technical enforcement but semantic agreement. Everyone must agree on what the boundary means, who owns it, and what evidence is required to change it. That’s the same reason contracts and IP governance around AI-generated assets matter: once boundaries are ambiguous, enforcement becomes contested. Security boundaries are no different.
Telemetry Architecture: From Raw Signals to Trust Decisions
Design for correlation, not just collection
A visibility platform should ingest telemetry from the identity provider, endpoint tools, cloud control planes, proxies, application gateways, service mesh, and SIEM, then normalize them into a shared event schema. The key is correlation: the same user should be recognizable across sessions, the same device across tools, and the same service across environments. Without normalization, you will drown in tool-specific data and still fail to answer simple risk questions.
One useful design pattern is a three-layer pipeline. First, collect raw events into a durable stream. Second, enrich events with identity attributes, asset ownership, boundary classification, and threat intelligence. Third, publish policy-ready records to enforcement points such as conditional access engines, PAM workflows, and access review systems. This architecture keeps the control plane decoupled from the sensors while still letting them act in near real time. For a parallel in workflow design, our piece on PromptOps-style reusable components demonstrates how repeatability turns ad hoc practices into a maintainable system.
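A toy version of that three-layer pipeline, using in-process queues as stand-ins for a durable stream and hypothetical lookup tables for the enrichment layer; in production these would be a message bus and your ownership and boundary registries:

```python
from queue import Queue

raw_stream: Queue = Queue()       # layer 1: durable event stream (stand-in)
policy_stream: Queue = Queue()    # layer 3: policy-ready records

OWNERS = {"svc-payments": "payments-team"}          # asset ownership lookup
BOUNDARIES = {"svc-payments": "high/internet"}      # boundary classification

def enrich(event: dict) -> dict:
    """Layer 2: attach ownership and boundary context to a raw event."""
    svc = event["service"]
    return {**event,
            "owner": OWNERS.get(svc, "UNOWNED"),
            "boundary": BOUNDARIES.get(svc, "unclassified")}

# Simulate the flow end to end.
raw_stream.put({"actor": "u-19422", "service": "svc-payments", "action": "read"})
while not raw_stream.empty():
    policy_stream.put(enrich(raw_stream.get()))

record = policy_stream.get()
print(record)  # enforcement points (conditional access, PAM) consume this record
```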
Observability and security should share a data model
Security teams often build separate telemetry stacks from platform engineering, but identity-centric visibility works best when observability and security share the same underlying context. If app performance dashboards know service topology but security tools do not, you will miss attacks that look like traffic anomalies. If security tools know identities but not latency or routing, you will miss the operational fingerprints of compromise. A shared model lets SRE, IAM, and security teams reason over the same truth.
This convergence is increasingly important for incident response. During an event, the first question is not “Which alert fired?” but “What changed, who is affected, and what boundaries were crossed?” That is why observability must feed access governance directly. When visibility is unified, a suspicious session can be downgraded, token scopes can be constrained, and affected services can be segmented before an attacker reaches crown-jewel systems.
Choose measurable outcomes over sensor counts
It is easy to measure how many logs you ingest; it is harder to measure whether you can make better decisions because of them. Focus on outcomes such as time-to-discover unknown assets, time-to-detect privilege anomalies, percentage of service accounts mapped to owners, percentage of accesses tied to verified device posture, and reduction in orphaned entitlements. These metrics connect telemetry to risk reduction and make the case to leadership more credible.
That mindset mirrors product and infrastructure decision-making in other domains, such as whether to upgrade now or wait. Good leaders do not buy tools because they are new; they buy them because the timing and expected return are clear. Your visibility platform should be justified the same way.
Identity Visibility for Access Governance and CISO Oversight
Turn visibility into least privilege at scale
Least privilege fails when entitlements are assigned once and never revisited. An identity-centric visibility platform gives access governance the evidence needed to revoke stale access, narrow scopes, and enforce just-in-time access with confidence. When you know which identities are actually using which services, you can remove unused permissions and reduce blast radius without disrupting legitimate work. This is especially important in enterprises with multiple business units, where ownership often drifts as teams reorganize.
The strongest programs tie access review to live usage data rather than self-attestation alone. If an account has not touched a production boundary in 90 days, it should trigger a review. If a service account suddenly starts accessing a new database, it should require justification. If a contractor device is no longer compliant, access should degrade automatically. This is what access governance looks like when it is grounded in real telemetry rather than policy theater.
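Sketched in code, those triggers are just rules over live usage records. The thresholds and field names below are assumptions; the 90-day dormancy window follows the example above:

```python
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)
REVIEW_AFTER = timedelta(days=90)   # dormancy threshold from the text

def review_triggers(identity: dict) -> list[str]:
    """Return review actions driven by live usage, not self-attestation."""
    triggers = []
    if NOW - identity["last_prod_access"] > REVIEW_AFTER:
        triggers.append("entitlement_review")
    if identity["new_resources_accessed"]:
        triggers.append("justification_required")   # e.g. a new database touched
    if not identity["device_compliant"]:
        triggers.append("degrade_access")
    return triggers

contractor = {
    "last_prod_access": NOW - timedelta(days=120),
    "new_resources_accessed": ["db-billing"],   # hypothetical finding
    "device_compliant": False,
}
print(review_triggers(contractor))
# -> ['entitlement_review', 'justification_required', 'degrade_access']
```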
CISO reporting should be boundary-aware
Executives do not need raw event counts; they need a risk picture. A CISO should be able to report how much of the environment is mapped, how many assets remain unknown, which high-risk boundaries lack strong controls, and how quickly the organization can react when trust changes. Boundary-aware reporting makes security posture legible to business leadership because it ties technical findings to operational scope.
For organizations with strong compliance obligations, this also improves audit readiness. When asked to prove control coverage, you can show discovery evidence, boundary maps, identity telemetry, and policy enforcement results. That is much stronger than relying on periodic screenshots or manual spreadsheets. The same principle of proving operational readiness appears in genomic surveillance: continuous monitoring is what makes the response credible.
Use telemetry to reduce fraud and account takeover
Identity-centric visibility is not only about internal governance; it also helps defend customer-facing systems against account takeover and fraud. By correlating device reputation, login velocity, geolocation shifts, behavioral anomalies, and impossible travel signals, the platform can identify suspicious sessions before damage spreads. The result is better fraud prevention with less friction for legitimate users, especially when step-up authentication is applied only when the context warrants it.
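For example, an impossible-travel check becomes a straightforward computation once login events carry geolocation. This sketch uses the haversine formula and an assumed maximum plausible travel speed; production systems would layer this with device reputation and behavioral signals:

```python
import math
from datetime import datetime, timedelta, timezone

def km_between(a: tuple, b: tuple) -> float:
    """Great-circle distance in km via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

MAX_SPEED_KMH = 900  # roughly airliner speed; anything faster is "impossible"

def impossible_travel(prev: dict, curr: dict) -> bool:
    hours = (curr["time"] - prev["time"]).total_seconds() / 3600
    if hours <= 0:
        return True
    return km_between(prev["geo"], curr["geo"]) / hours > MAX_SPEED_KMH

t0 = datetime.now(timezone.utc)
prev = {"geo": (40.71, -74.01), "time": t0}                      # New York
curr = {"geo": (51.51, -0.13), "time": t0 + timedelta(hours=1)}  # London, 1h later
if impossible_travel(prev, curr):
    print("anomaly: trigger step-up authentication")
```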
If you are designing customer journeys or payment flows, the logic is similar to the threat modeling work in designing payment flows for live commerce. The right control at the right moment reduces fraud without punishing every user. That balance is the practical goal of identity visibility.
A Practical Roadmap: How to Build the Platform in Phases
Phase 1: establish the telemetry foundation
Start by inventorying your highest-value data sources: identity provider logs, endpoint management, cloud audit logs, VPN/proxy telemetry, and application access logs. Normalize identities across those sources using stable IDs, not display names alone. Define the minimum data model: actor, device, service, action, outcome, sensitivity, and owner. Then pick a small set of critical workflows—such as privileged access, customer data access, or SaaS admin actions—to validate the pipeline.
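The stable-ID requirement is worth illustrating, because it is where most correlation efforts quietly fail. In this sketch, the mapping table and source records are hypothetical; the idea is that events keyed on varying display names or ARNs resolve to one stable actor:

```python
# Different sources describe the same person with different subjects,
# so key everything on a stable ID instead of display names.
ID_MAP = {
    ("okta", "alice.w@example.com"): "u-19422",
    ("aws",  "arn:aws:iam::111122223333:user/alice"): "u-19422",
}

raw_events = [
    {"source": "okta", "subject": "alice.w@example.com", "action": "login"},
    {"source": "aws",  "subject": "arn:aws:iam::111122223333:user/alice",
     "action": "s3:GetObject"},
]

def normalize(event: dict) -> dict:
    stable = ID_MAP.get((event["source"], event["subject"]))
    return {**event, "actor_id": stable or "UNRESOLVED"}

for e in map(normalize, raw_events):
    print(e["actor_id"], "<-", e["source"], e["action"])
# Both events resolve to u-19422, so they correlate in the live model.
```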
At this stage, resist the temptation to boil the ocean. Your first goal is not perfect coverage; it is trustworthy signal. If your organization also manages complex reporting or documentation pipelines, the approach is similar to turning webinars into learning modules: start with the most reusable content, then standardize from there. A narrow, well-instrumented beginning will outperform a broad but shallow rollout.
Phase 2: automate discovery and boundary mapping
Once the telemetry is flowing, build discovery jobs that identify unknown assets, stale access, undocumented service accounts, and unowned workloads. Feed those findings into a boundary graph that links services to data domains and access paths. Validate the graph with app owners and platform teams, because automated inference should be challenged by human context. The purpose is to converge on a reliable map, not to replace ownership.
Then begin policy enforcement in low-risk areas. For example, require ownership assignment for any new service, short-lived credentials for newly discovered workloads, and device compliance for access to sensitive dashboards. Over time, expand the model to more critical services. This phased approach mirrors the pragmatic sequencing in AI-driven EDA adoption: establish measurable wins first, then scale the system.
Phase 3: operationalize access decisions and continuous improvement
The final phase is where visibility becomes control. Integrate the platform with conditional access, PAM, IGA, SOAR, and ticketing workflows. Use telemetry to trigger access reviews, enforce step-up authentication, quarantine suspicious sessions, and retire unused privileges. Make boundary map changes part of your change management process so service owners are accountable for the trust relationships they create.
From there, continuously tune the model. Measure false positives, missed discoveries, time-to-remediate, and business impact. Review the telemetry quality monthly and refine your schemas as new applications, clouds, and device types are added. This is how a platform evolves from a project into an operating capability.
Comparison Table: Visibility Approaches and Their Operational Tradeoffs
| Approach | Primary Data Sources | Strengths | Weaknesses | Best Use Case |
|---|---|---|---|---|
| Quarterly asset inventory | CMDB, spreadsheets, manual exports | Simple, cheap, easy to explain | Stale quickly, poor ownership clarity, no live context | Basic audit prep |
| SIEM-only visibility | Security logs, alerts, event streams | Good for incident detection and correlation | Lacks business context and service boundaries | Security operations |
| IAM-centric visibility | IdP, directory, MFA, SSO logs | Strong identity lifecycle insight and access control linkage | Weak on device posture and service topology | Access governance and auth policy |
| Endpoint + network telemetry | EDR, MDM, VPN, proxy, DNS, NDR | Strong device and path visibility | Identity correlation can be incomplete | Threat hunting and risk scoring |
| Identity-centric visibility platform | IAM, endpoint, cloud, app, service mesh, CMDB, HR | Unified view of identity, device, network, and boundary context | Requires integration, schema design, and ownership discipline | Enterprise access governance and attack-surface reduction |
Common Pitfalls That Undermine Visibility Programs
Collecting too much data without a model
Teams often assume that more telemetry equals better visibility, but unmanaged data volume can obscure real risk. If you do not normalize identities, assign ownership, and define boundaries, you will create noise at scale. The solution is not fewer sources; it is a tighter semantic model. Every event should answer a business question, or it should not be on the critical path.
Confusing inventory with control
A discovered asset is not a controlled asset. A mapped boundary is not an enforced boundary. A classified identity is not a governed identity. Visibility must connect to action, whether that action is remediation, policy enforcement, or an access review. Otherwise, you are building a reporting layer that gives false assurance.
Ignoring business ownership
Technical teams often discover assets faster than business teams can own them, and the result is a backlog of orphaned findings. That backlog becomes dangerous when nobody feels responsible for a service, account, or boundary. Every discovered object should have an owner, an escalation path, and a remediation deadline. If ownership is unclear, treat the issue as a control failure, not a documentation inconvenience.
Pro Tip: The fastest way to improve identity visibility is to start with “high-risk, high-change” systems: privileged admin accounts, customer-facing apps, and service-to-service APIs. These are where telemetry quality produces the largest security payoff.
FAQ
What is identity visibility in practical terms?
Identity visibility is the ability to continuously see who or what is accessing your systems, from which device, over which network path, to which service, and under what policy. It combines identity, endpoint, network, and service telemetry into a single decision-making view. In practice, it helps security and IAM teams detect risk faster and govern access with confidence.
How is identity visibility different from observability?
Observability focuses on understanding system behavior from logs, metrics, and traces, usually for performance or reliability. Identity visibility uses similar principles but centers on trust, access, and authorization boundaries. The two are complementary: observability tells you what the system is doing, while identity visibility helps you determine whether the actor and access path should be trusted.
What telemetry sources should I integrate first?
Start with your identity provider, endpoint management platform, cloud audit logs, and application access logs. Those sources give you the quickest path to a trustworthy baseline of users, devices, sessions, and resource access. After that, add VPN, proxy, DNS, service mesh, and CMDB data to improve boundary mapping and reduce blind spots.
How do I map service boundaries without creating a huge manual project?
Begin by inferring boundaries from traffic patterns, application metadata, ownership records, and authentication flows. Use automated discovery to propose relationships, then validate them with service owners in small batches. Over time, enforce boundary registration as part of change management so every new service or API must declare its trust zone and data exposure.
What metrics prove that a visibility platform is working?
Useful metrics include percentage of assets discovered automatically, percentage of identities linked to an owner, number of orphaned service accounts eliminated, time-to-detect a new high-risk boundary, and percentage of sensitive access events enriched with device posture. You should also track reduction in excessive privileges and faster remediation of stale access.
Can this reduce account takeover and fraud risk?
Yes. When identity, device, and network signals are correlated, suspicious sessions become easier to detect and stop. The platform can trigger step-up authentication, limit session scope, or quarantine risky activity when it sees anomalies such as impossible travel, unusual device posture, or abnormal service access. That reduces fraud while preserving a smoother experience for legitimate users.
Conclusion: Visibility Is the First Control You Need to Trust
Mastercard’s visibility thesis is compelling because it reflects the operational reality of modern infrastructure: you cannot secure what you cannot continuously observe. Identity-centric visibility is the practical response. It combines telemetry, automated discovery, and boundary mapping into a platform that helps CISOs reduce attack surface, improve access governance, and prove control coverage without relying on stale assumptions. Done well, it becomes the shared truth between identity, security, platform engineering, and compliance.
The next step is not to buy another dashboard. It is to build a model of your environment that is rich enough to trust and dynamic enough to use. Start with the telemetry you already have, add the boundary context you are missing, and use the resulting graph to make access decisions measurable. If you want to expand your thinking further, consider how adjacent operational disciplines—like managing flexible travel constraints, tooling creative workflows, and understanding how narratives shape decisions—all depend on seeing the system clearly before acting. Security is no different. Visibility is the control plane.
Related Reading
- Designing Payment Flows for Live Commerce: Threat Models, UX and Defenses - Learn how to balance security controls with customer friction in high-risk transactions.
- AI in Tech Companies: Balancing Innovation with Security Skepticism - A pragmatic view of adopting new tech without losing control.
- How to Evaluate Data Analytics Vendors for Geospatial Projects - A useful framework for judging tooling, data quality, and boundary confidence.
- Vaccines, Variants, and the Road: Understanding Genomic Surveillance for Safer Travel - See how continuous surveillance creates better response decisions.
- Adopting AI-Driven EDA: Where to Start, Common Pitfalls, and Measurable ROI for Chip Teams - A strong example of phased adoption with measurable outcomes.
Daniel Mercer
Senior Identity Infrastructure Editor
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.