6 free tools to visualize dependency graphs, detect version conflicts, analyze monorepo package relationships, simulate updates, and track version drift — directly in the browser.
What is a dependency graph and how do I visualize it?
A dependency graph maps all the packages your project depends on — including transitive dependencies pulled in by your direct dependencies. To visualize it, you can run npm ls --all or use tools like our Dependency Graph Visualizer. Circular dependencies (A depends on B, which depends back on A) are a common source of build errors in complex projects.
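As a minimal sketch of how a visualizer flags cycles, the following walks a flattened dependency map with a depth-first search. The deps map is hypothetical example data, not output from any real tool.

```python
# Detect a circular dependency in a flattened dependency map
# using DFS with three node colors (unvisited / in progress / done).

def find_cycle(deps):
    """Return one dependency cycle as a list of names, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    state = {pkg: WHITE for pkg in deps}
    path = []

    def visit(pkg):
        state[pkg] = GRAY
        path.append(pkg)
        for dep in deps.get(pkg, []):
            if state.get(dep, WHITE) == GRAY:      # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if state.get(dep, WHITE) == WHITE and dep in deps:
                cycle = visit(dep)
                if cycle:
                    return cycle
        path.pop()
        state[pkg] = BLACK
        return None

    for pkg in deps:
        if state[pkg] == WHITE:
            cycle = visit(pkg)
            if cycle:
                return cycle
    return None

# Hypothetical example: ui -> theme -> ui is circular.
deps = {
    "app":   ["ui", "utils"],
    "ui":    ["theme"],
    "theme": ["ui"],
    "utils": [],
}
print(find_cycle(deps))  # ['ui', 'theme', 'ui']
```

Real tooling works from package-lock.json rather than a hand-written map, but the cycle check itself is the same traversal.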
How do I detect version conflicts in my npm dependencies?
Version conflicts in npm occur when two packages require incompatible versions of a shared dependency. Run npm ls <package-name> to see all resolved versions of a specific package. Our Version Conflict Checker parses your package.json and package-lock.json to identify conflicts, duplicate installs, and incompatible peer dependencies before they cause runtime errors.
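A sketch of the duplicate-install part of that check, assuming lock data shaped like the packages map in a v2/v3 package-lock.json (the entries below are hypothetical):

```python
from collections import defaultdict

def find_duplicate_installs(lock_packages):
    """Group resolved versions by package name; keep names with >1 version."""
    versions = defaultdict(set)
    for path, meta in lock_packages.items():
        if not path:                      # "" is the root project entry
            continue
        # The package name is whatever follows the last "node_modules/".
        name = path.split("node_modules/")[-1]
        versions[name].add(meta["version"])
    return {name: sorted(v) for name, v in versions.items() if len(v) > 1}

# Hypothetical excerpt of a lockfile's "packages" map.
lock_packages = {
    "": {"version": "1.0.0"},
    "node_modules/semver": {"version": "7.5.4"},
    "node_modules/lru-cache": {"version": "10.0.0"},
    "node_modules/foo/node_modules/semver": {"version": "5.7.2"},
}
print(find_duplicate_installs(lock_packages))  # {'semver': ['5.7.2', '7.5.4']}
```

Nested node_modules paths are exactly how npm records duplicate installs, so every package that appears with more than one resolved version is a deduplication or conflict candidate.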
What is version drift in a monorepo?
Version drift happens when different packages in a monorepo use different versions of the same dependency — for example, one service using React 17 while another uses React 18. This creates inconsistency, makes shared components harder to maintain, and can cause subtle runtime bugs. Nx's enforce-module-boundaries lint rule constrains which packages can depend on each other, tools like syncpack align dependency versions across a monorepo, and our Version Drift Analyzer helps you surface drift across all packages at once.
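A sketch of the cross-package comparison, assuming you have already collected each workspace package's declared dependencies (the workspace below is hypothetical):

```python
from collections import defaultdict

def find_drift(workspace_deps):
    """Map dependency -> {version: [packages using it]}; keep only drifted ones."""
    usage = defaultdict(lambda: defaultdict(list))
    for pkg, deps in workspace_deps.items():
        for dep, version in deps.items():
            usage[dep][version].append(pkg)
    return {dep: dict(by_ver) for dep, by_ver in usage.items() if len(by_ver) > 1}

# Hypothetical monorepo: dependencies pulled from each package's package.json.
workspace_deps = {
    "web":    {"react": "^18.2.0", "lodash": "^4.17.21"},
    "admin":  {"react": "^17.0.2", "lodash": "^4.17.21"},
    "shared": {"react": "^18.2.0"},
}
print(find_drift(workspace_deps))
# {'react': {'^18.2.0': ['web', 'shared'], '^17.0.2': ['admin']}}
```

Grouping by version range rather than resolved version is deliberate: two ranges can resolve to the same version today and drift apart on the next install.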
How do I safely simulate a major npm dependency update?
Before running a major upgrade (e.g. webpack 4 to 5, or React 17 to 18), you should check for breaking changes in the changelog, identify which of your packages have peer dependency constraints on the old version, and run the update in a branch with a full test suite. Our Update Simulator helps you assess risk by evaluating semver compatibility and identifying the chain of packages that would be affected by a given version bump.
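A sketch of the two checks described above — a semver major-bump test and a reverse-dependency walk to find the affected chain. The reverse_deps map is a hypothetical example, not real registry data.

```python
def is_breaking(old, new):
    """Under semver, a major-version change signals breaking changes."""
    return old.split(".")[0] != new.split(".")[0]

def affected_chain(reverse_deps, package):
    """BFS over reverse dependencies: everything that transitively depends on `package`."""
    seen, queue = set(), [package]
    while queue:
        current = queue.pop(0)
        for dependent in reverse_deps.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

# Hypothetical reverse-dependency map: package -> packages that depend on it.
reverse_deps = {
    "webpack": ["build-tools", "web"],
    "build-tools": ["web", "admin"],
}
print(is_breaking("4.46.0", "5.88.0"))       # True: major bump
print(affected_chain(reverse_deps, "webpack"))  # ['admin', 'build-tools', 'web']
```

Everything in the returned chain needs its peer dependency ranges and test suite checked before the bump merges.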
What causes node_modules to become so large?
The main contributors to node_modules size are: transitive dependencies (your dependencies' dependencies), duplicate packages installed at multiple versions, packages with large compiled binaries (like puppeteer or sharp), and development dependencies included in production installs. Use npm install --omit=dev (the successor to the older --production flag) to exclude devDependencies in production, and our Repo Size Analyzer to find the heaviest packages in your install footprint.
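As a minimal sketch of that size analysis, the following sums file sizes under each top-level directory in a node_modules folder. For brevity it treats scoped packages (@scope/name, which nest one level deeper) as a single entry.

```python
import os

def heaviest_packages(node_modules, top=5):
    """Sum file sizes under each top-level package directory in node_modules."""
    sizes = {}
    for entry in os.listdir(node_modules):
        pkg_dir = os.path.join(node_modules, entry)
        if not os.path.isdir(pkg_dir):
            continue
        total = 0
        for root, _dirs, files in os.walk(pkg_dir):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        sizes[entry] = total
    # Largest first, truncated to the top N offenders.
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:top]
```

Point it at a real install, e.g. heaviest_packages("node_modules"), to list the largest packages in bytes.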
Building a cross-repo call graph to track who depends on a deprecated Protobuf field is one of the harder dependency problems in distributed systems — it requires analyzing generated code across multiple repositories, not just package.json or go.mod files. Here is the most reliable approach, broken down by step.
Before building any graph, mark the field with the deprecated = true option in your .proto file. This is separate from the reserved keyword — deprecated = true keeps the field usable but signals intent, while reserved prevents future use of the field number. In proto3: int32 old_field = 3 [deprecated = true];. This gives generated SDKs a warning hook and makes the field findable by static analysis across generated code.
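For concreteness, a hypothetical message showing both mechanisms side by side (the message and field names are made up):

```protobuf
syntax = "proto3";

message UserRecord {
  string name = 1;
  string email = 2;

  // Step 1: the field stays on the wire and in generated code,
  // but SDKs and linters can now flag every use of it.
  int32 old_field = 3 [deprecated = true];

  // Final state, only after every consumer has migrated:
  // delete the field above and reserve its number and name so
  // neither can be reused with a different meaning.
  // reserved 3;
  // reserved "old_field";
}
```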
If you publish your Protobuf schema to the Buf Schema Registry (BSR), you can run buf breaking against all downstream consumers that have declared your module as a dependency in their buf.yaml. The BSR tracks which modules import your .proto — this is the fastest way to enumerate affected repos without manual searching. Note that buf dep graph on your own module prints what you depend on; the list of downstream consumers comes from the registry side.
Generated Protobuf code (Go structs, Python dataclasses, Java POJOs) has predictable field accessor patterns. Use ast-grep or semgrep with a pattern matching the deprecated field name: e.g. ast-grep --pattern '$X.OldField' --lang go across a cloned copy of all consumer repos. This finds actual call sites — not just import declarations — so you know which services read or write the field, not just which ones import the proto package.
For organizations with many repos, running local clones is impractical. Sourcegraph indexes all your repos and supports structural search — search for the generated field accessor pattern across your entire codebase with one query. grep.app does the same for public GitHub repos. The result is a cross-repo call graph showing every file and function that references the deprecated field, which you can export and feed into a dependency graph visualizer.
Once you have the list of call sites, group them by repository and cross-reference with your CODEOWNERS or team ownership file. The output is a dependency graph where each node is a repo and edges represent "this repo uses the deprecated field." Sort by frequency of use to prioritize migration. Repos with high usage need a migration path before you can mark the field as reserved. This graph is what the Dependency Graph Visualizer above helps you visualize once you have the edge list.
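A sketch of that grouping step, assuming your search tool exports matches as (repo, file, line) records and you have a repo-to-owner map derived from CODEOWNERS (all names below are hypothetical):

```python
from collections import Counter

def build_edge_list(call_sites, owners):
    """Turn raw call sites into (repo, owner, usage_count) edges, heaviest first."""
    by_repo = Counter(site["repo"] for site in call_sites)
    return sorted(
        ((repo, owners.get(repo, "unowned"), count) for repo, count in by_repo.items()),
        key=lambda edge: edge[2],
        reverse=True,
    )

# Hypothetical search output: one entry per match of the deprecated accessor.
call_sites = [
    {"repo": "billing-service", "file": "invoice.go", "line": 42},
    {"repo": "billing-service", "file": "refund.go", "line": 7},
    {"repo": "email-service", "file": "render.go", "line": 120},
]
owners = {"billing-service": "@payments-team", "email-service": "@growth-team"}
print(build_edge_list(call_sites, owners))
# [('billing-service', '@payments-team', 2), ('email-service', '@growth-team', 1)]
```

The sorted edge list doubles as the migration checklist: start with the heaviest consumer and work down.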
buf CLI + BSR
Best for: teams already using the Buf Schema Registry. The BSR records which registered modules depend on yours, and buf dep graph prints your module's dependency graph. No code cloning required.
ast-grep / semgrep
Best for: finding actual field access sites in generated code across cloned repos. Supports Go, Java, Python, TypeScript. Outputs precise file:line locations for each call site.
Sourcegraph structural search
Best for: large engineering organizations. Indexes all repos and supports cross-language structural search without cloning. The :[field] pattern syntax finds accessor usage precisely.
GitHub code search (REST API)
Best for: public repos or GitHub Enterprise. Use GET /search/code?q=OldField+language:go to enumerate all repos using the generated field name across your org.
deps.dev + OpenSSF
Best for: package-level dependency graphs (not field-level). Maps which modules import which Protobuf package version. Useful for finding the consumer list before doing call-site analysis.
Neo4j + code ingestion
Best for: persistent, queryable dependency graphs. Ingest AST output into a graph database and query with Cypher: MATCH (s)-[:CALLS]->(f:Field {name:'old_field'}) RETURN s.
deprecated = true is the right first step — it preserves backward compatibility while signaling intent. reserved should only be used after all consumers have migrated, because it makes the field number and name permanently unavailable for future use. The call graph you build in the steps above gives you the migration checklist before you can safely move from deprecated to reserved. If a field is in a widely consumed shared schema, expect 2–6 months of migration time across dependent teams.