pyenvsearch
Python package exploration for AI agents that stops them from hallucinating APIs
pyenvsearch is a command-line tool that helps AI coding agents like Claude and Gemini understand your actual installed Python packages instead of making up imaginary classes and methods. It solves a critical problem: most agents respect .gitignore files and can't see into .venv/site-packages, where your actual library code lives.
Problem it solves
When AI agents try to help with code, they often hallucinate APIs that don't exist, especially for private libraries or packages that have changed since model training. Even worse, agents like Claude Code can't look into .venv folders because they respect .gitignore, leaving them blind to what's actually installed.
pyenvsearch gives agents the tools to:
- see what's really in your virtual environment (bypassing .gitignore restrictions)
- get actual function signatures and class definitions from installed packages
- search through site-packages to find the real APIs, not imagined ones
- provide structured JSON output that agents can parse and use
Key features
Enhanced object inspection: a replacement for Python's dir() that actually tells you what things are:

from pyenvsearch import enhanced_dir
import json  # any installed module works as an example

enhanced_dir(json)  # shows types, signatures, docstrings
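To see why this beats a bare dir(), here is a rough sketch of the idea using only the standard library's inspect module. The function name and the report shape below are illustrative, not pyenvsearch's actual implementation or output format:

```python
import inspect
import json

def sketch_enhanced_dir(obj):
    """Illustrative only: for each public attribute, report its kind,
    a callable's real signature, and the first line of its docstring."""
    report = {}
    for name in dir(obj):
        if name.startswith("_"):
            continue
        attr = getattr(obj, name)
        entry = {"type": type(attr).__name__}
        if callable(attr):
            try:
                entry["signature"] = str(inspect.signature(attr))
            except (ValueError, TypeError):
                entry["signature"] = "(signature unavailable)"
        doc = inspect.getdoc(attr)
        if doc:
            entry["doc"] = doc.splitlines()[0]
        report[name] = entry
    return report

info = sketch_enhanced_dir(json)
print(info["loads"]["signature"])  # the real signature, not a guess
```

A plain dir() would return only the string "loads" here; the point of the enhanced version is that the agent also gets the type, the actual parameter list, and the docstring summary.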
Package exploration: commands that help agents understand what's installed:
- pyenvsearch find <package>: where is this package actually installed?
- pyenvsearch toc <package>: what's the structure of this package?
- pyenvsearch search "class.*Client": find actual classes, not imagined ones
- pyenvsearch list-methods <package>: get real method signatures
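Conceptually, "find" and "search" amount to resolving a package's real location and then grepping its real source. A standard-library sketch of both, using the stdlib json package as a stand-in for any installed one (this illustrates the commands above, it is not pyenvsearch's code):

```python
import json
import pathlib
import re

# "find": ask the import system where the package actually lives.
pkg_dir = pathlib.Path(json.__file__).parent
print(pkg_dir)

# "search": scan the package's real source for class definitions,
# analogous to `pyenvsearch search "class.*Decoder"`.
pattern = re.compile(r"class \w*Decoder")
hits = [
    (path.name, match.group())
    for path in pkg_dir.glob("*.py")
    for match in pattern.finditer(path.read_text())
]
print(hits)  # e.g. [('decoder.py', 'class JSONDecoder')]
```

Because the matches come from files on disk rather than from training data, a class either shows up in `hits` or it does not exist.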
Installation
uv tool install pyenvsearch # recommended
# or
pipx install pyenvsearch
Design philosophy
The tool is built with zero heavy dependencies (Python standard library only) and outputs everything as JSON so agents can actually use it. It works around the .gitignore problem by providing a bridge between what's installed and what agents can see.

All commands support --json for structured output, making it trivial for agents to parse results and understand your actual Python environment instead of guessing based on outdated training data.
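This is what the agent side of that contract looks like. The payload below is a hypothetical example (the field names are illustrative, not pyenvsearch's documented schema); the point is that structured output lets an agent read fields directly instead of scraping human-oriented text:

```python
import json

# Hypothetical --json output from a `find`-style command; the keys
# "package", "location", and "version" are illustrative assumptions.
raw = (
    '{"package": "httpx", '
    '"location": "/app/.venv/lib/python3.12/site-packages/httpx", '
    '"version": "0.27.0"}'
)

result = json.loads(raw)
print(f'{result["package"]} {result["version"]} '
      f'is installed at {result["location"]}')
```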