Metadata-Version: 2.4
Name: agentberlin
Version: 0.41.0
Summary: Python SDK for Agent Berlin - AI-powered SEO and AEO automation
Project-URL: Homepage, https://agentberlin.ai
Project-URL: Documentation, https://docs.agentberlin.ai/sdk/python
Project-URL: Repository, https://github.com/boat-builder/agentberlin
Author-email: Agent Berlin <support@agentberlin.ai>
License: MIT
License-File: LICENSE
Keywords: aeo,ai,analytics,optimization,search,seo
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Typing :: Typed
Requires-Python: >=3.9
Requires-Dist: click>=8.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: requests>=2.28.0
Provides-Extra: dev
Requires-Dist: black>=24.0.0; extra == 'dev'
Requires-Dist: isort>=5.13.0; extra == 'dev'
Requires-Dist: mypy>=1.8.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: responses>=0.25.0; extra == 'dev'
Requires-Dist: types-requests>=2.31.0; extra == 'dev'
Description-Content-Type: text/markdown

# Agent Berlin Python SDK

## Installation

```bash
pip install agentberlin
```

The Agent Berlin Python SDK provides AI-powered SEO and AEO (Answer Engine Optimization) automation.

## Quick Start

```python
from agentberlin import AgentBerlin
client = AgentBerlin()
```

## Important Notes

1. Scripts should print their results or return values so their output can be captured
2. Use proper error handling with try/except blocks
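The second note can be sketched as a small wrapper. `call_safely` is a hypothetical helper, not part of the SDK; it works with any SDK call (e.g. `client.pages.list`), returning a `(result, error)` pair instead of raising:

```python
def call_safely(fn, *args, **kwargs):
    """Run a callable, returning (result, error) instead of raising.

    Lets a script log a diagnostic and continue when one call fails.
    """
    try:
        return fn(*args, **kwargs), None
    except Exception as exc:  # SDK errors surface here
        return None, exc


# Demonstrated with stubs in place of real SDK calls:
result, err = call_safely(lambda: 1 / 0)  # error captured, not raised
ok, err2 = call_safely(lambda: 42)        # normal return passes through
```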

## Available APIs

- **ga4**: `get()` - Traffic, visibility, and performance analytics
- **pages**: `list()`, `search()`, `get()` - List pages, search pages, and get detailed page info
- **keywords**: `search()`, `list_clusters()`, `get_cluster_keywords()` - Search keywords, explore clusters
- **brand**: `get_profile()`, `update_profile()` - Brand profile management
- **google_cse**: `search()` - Google web search
- **serpapi**: `search()` - Multi-engine search with News, Images, Videos, Shopping
- **files**: `upload()` - Upload files to cloud storage
- **gsc**: `query()`, `get_site()`, `list_sitemaps()`, `get_sitemap()`, `inspect_url()` - Search analytics, sitemaps, URL inspection
- **reddit**: `get_subreddit_posts()`, `search()`, `get_post()`, `get_post_comments()`, `get_subreddit_info()` - Posts, comments, and subreddit info
- **bing_webmaster**: `get_query_stats()`, `get_page_stats()`, `get_traffic_stats()`, `get_crawl_stats()`, `get_site()` - Bing Webmaster Tools analytics

## API Reference

### ga4

Comprehensive analytics data from Google Analytics.

#### client.ga4.get()

Get GA4 analytics for the project.

Signature:
```python
analytics = client.ga4.get()
```

Returns Analytics object with:
- domain - The domain name
- domain_authority - Domain authority score (0-100)
- visibility.current_percentage - Current visibility percentage
- visibility.ranking_stability - Ranking stability score
- visibility.share_of_voice - Share of voice percentage
- visibility.history - List of VisibilityPoint(date, percentage)
- traffic.total_sessions - Total traffic sessions
- traffic.llm_sessions - Sessions from LLM sources
- traffic.channel_breakdown.direct - Direct traffic
- traffic.channel_breakdown.organic_search - Organic search traffic
- traffic.channel_breakdown.referral - Referral traffic
- traffic.channel_breakdown.organic_social - Social traffic
- traffic.channel_breakdown.llm - LLM-referred traffic
- traffic.daily_trend - List of DailyTraffic(date, sessions, llm_sessions)
- topics - List of TopicSummary(name, appearances, avg_position, topical_authority, trend)
- competitors - List of CompetitorSummary(name, visibility, share_of_voice)
- data_range.start - Data start date
- data_range.end - Data end date
- last_updated - Last update timestamp
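As a worked example of the traffic fields above, a hypothetical helper (not part of the SDK) that computes the share of LLM-referred sessions from `traffic.total_sessions` and `traffic.llm_sessions`:

```python
def llm_session_share(total_sessions: int, llm_sessions: int) -> float:
    """Percentage of sessions referred by LLM sources."""
    if total_sessions <= 0:
        return 0.0
    return round(100.0 * llm_sessions / total_sessions, 2)


# With a real response: analytics = client.ga4.get()
#   llm_session_share(analytics.traffic.total_sessions,
#                     analytics.traffic.llm_sessions)
share = llm_session_share(10_000, 250)
```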

### pages

A search engine backed by your own and your competitors' crawled pages.

#### client.pages.list()

List pages with pagination and optional filtering.

Signature:
```python
result = client.pages.list(
    limit=100,           # Max pages (default: 100, max: 1000)
    offset=0,            # Pagination offset (default: 0)
    domain="example.com", # Optional: filter by domain
    own_only=True        # Optional: exclude competitor pages
)
```

Args:
- limit: Maximum pages to return (default: 100, max: 1000)
- offset: Pagination offset (default: 0)
- domain: Filter by domain (e.g., "competitor.com")
- own_only: If True, only return your own pages

Returns PageListResponse with:
- pages - List of PageListItem objects
- total - Number of pages in response
- limit - Limit used
- offset - Offset used

Each PageListItem has:
- url - Page URL
- title - Page title
- meta_description - Meta description (optional)
- h1 - H1 heading (optional)
- domain - Domain name (optional)
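The limit/offset pagination above can be walked with a generic helper. `iter_all_pages` is hypothetical, not part of the SDK; it takes any fetch callable, so it is shown here against a stub rather than a live client:

```python
from types import SimpleNamespace


def iter_all_pages(fetch, limit=100):
    """Yield every page item by walking limit/offset pagination.

    `fetch` is a callable such as:
        lambda limit, offset: client.pages.list(limit=limit, offset=offset)
    Stops when a batch comes back smaller than `limit`.
    """
    offset = 0
    while True:
        batch = fetch(limit=limit, offset=offset)
        yield from batch.pages
        if len(batch.pages) < limit:
            break
        offset += limit


# Stub standing in for the API: pretend there are 7 pages total.
def fake_fetch(limit, offset):
    data = list(range(7))
    return SimpleNamespace(pages=data[offset:offset + limit])

collected = list(iter_all_pages(fake_fetch, limit=3))
```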

#### client.pages.search()

Search for similar pages with multiple input modes.

This unified method supports three input modes (exactly one required):
- query: Simple text query for semantic search across all pages
- url: Find similar pages to an existing indexed page
- content: Find similar pages to provided raw content

Note: For chunk-level analysis, use pages.get(url, detailed=True) instead.

Signature:
```python
# Query mode - semantic search
results = client.pages.search(
    query="SEO best practices",   # Text query for semantic search
    count=10,                      # Max results (default: 10)
    offset=0,                      # Pagination offset (default: 0)
    similarity_threshold=0.60      # Min similarity 0-1 (default: 0.60)
)

# URL mode - find similar pages to an indexed page
results = client.pages.search(
    url="https://example.com/blog/guide",  # URL of indexed page
    count=10
)

# Content mode - find similar pages to raw content
results = client.pages.search(
    content="This is my page content about SEO..."
)

# Query mode with filters
results = client.pages.search(
    query="SEO guide",
    domain="competitor.com",       # Filter by domain
    status_code="200",             # Filter by HTTP status
    topic="SEO",                   # Filter by topic name
    page_type="pillar"             # Filter by page type
)
```

Args:
- query: Text query for semantic search (mutually exclusive with url/content)
- url: URL of an indexed page (mutually exclusive with query/content)
- content: Raw content string (mutually exclusive with query/url)
- count: Max recommendations (default: 10, max: 50)
- offset: Pagination offset (default: 0)
- similarity_threshold: Min similarity 0-1 (default: 0.60)
- domain: Filter by domain (e.g., "competitor.com")
- own_only: If True, filter to only your own pages (excludes competitors)
- status_code: Filter by HTTP status code ("200", "404", "error", "redirect", "success")
- topic: Filter by topic name
- page_type: Filter by page type ("pillar" or "landing")

Returns PageSearchResponse with:
- source_page_url - URL of source page (only in URL mode)
- source_page_type - "pillar", "landing", or None
- source_assigned_topic - Topic if page is assigned as pillar/landing
- similar_pages - List of SimilarPage objects (always includes evidence and topic_info)

Each SimilarPage has:
- target_page_url - URL of recommended page
- similarity_score - Similarity score 0-1
- match_type - "page_to_page"
- page_type - "pillar", "landing", or None
- assigned_topic - Topic if target is assigned as pillar/landing
- title - Page title
- h1 - H1 heading
- meta_description - Meta description
- evidence - List of Evidence objects (h_path, text) for top chunks
- domain - Page domain
- status_code - HTTP status code
- topic_info - TopicInfo object (topics, topic_scores, page_type, assigned_topic)
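A common follow-up is to keep only high-confidence matches and rank them. `strong_matches` is a hypothetical helper (not part of the SDK) that works on any objects exposing `similarity_score`, such as `results.similar_pages`; it is demonstrated on stubs:

```python
from types import SimpleNamespace


def strong_matches(similar_pages, min_score=0.75):
    """Filter similar-page results to high-confidence matches, best first."""
    hits = [p for p in similar_pages if p.similarity_score >= min_score]
    return sorted(hits, key=lambda p: p.similarity_score, reverse=True)


pages = [SimpleNamespace(target_page_url=u, similarity_score=s)
         for u, s in [("a", 0.9), ("b", 0.6), ("c", 0.8)]]
best = strong_matches(pages)
```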

#### client.pages.get()

Get detailed page information.

Signature:
```python
# Basic page details (metadata only)
page = client.pages.get(
    url="https://example.com/blog/article"
)

# Include page content (truncated to 500 chars)
page = client.pages.get(
    url="https://example.com/blog/article",
    content_length=500
)

# With similarity analysis (includes similar_pages and similar_chunks)
page = client.pages.get(
    url="https://example.com/blog/article",
    detailed=True  # Compute similar pages and chunks (expensive, 1-10s)
)
```

Args:
- url: The page URL to look up
- content_length: Max characters of content to return (default: 0, no content). There is no upper limit - pass a large value (e.g., 999999) to get full content. Note: pages can have large content (thousands of words), so large values will significantly increase response size.
- detailed: If True, compute and return similar_pages and similar_chunks (expensive)

Returns PageDetailResponse with:
- url - Page URL
- title - Page title
- meta_description - Meta description
- h1 - H1 heading
- domain - Domain name
- links.inlinks - List of incoming links (PageLink objects)
- links.outlinks - List of outgoing links (PageLink objects)
- topic_info.topics - List of topic names
- topic_info.topic_scores - List of topic relevance scores
- topic_info.page_type - "pillar" or "landing"
- topic_info.assigned_topic - Primary assigned topic
- content - Page content truncated to content_length (only returned if content_length > 0)
- content_length - Total content length in characters
- similar_pages - List of SimilarPage objects (only when detailed=True)
- similar_chunks - List of SimilarChunkSet objects (only when detailed=True)

### keywords

Search for keywords and explore keyword clusters (powered by SEMRush & DataForSEO).

#### client.keywords.search()

Search for keywords.

Signature:
```python
keywords = client.keywords.search(
    query="digital marketing",  # Search query
    limit=10,                   # Max results (default: 10)
    match_type="semantic"       # "semantic", "exact", or "contains" (default: "semantic")
)
```

Parameters:
- query - Search query string (required)
- limit - Maximum results to return (default: 10, max: 50)
- match_type - Search mode (default: "semantic"):
  - "semantic" - Hybrid AI search using embeddings and BM25
  - "exact" - Case-insensitive exact string match
  - "contains" - Case-insensitive substring/contains match

Returns KeywordSearchResponse with keywords list and total count.
Each keyword has:
- keyword - The keyword text
- volume - Monthly search volume
- difficulty - Difficulty score (0-100)
- cpc - Cost per click
- intent - Search intent: "informational", "commercial", "transactional", "navigational"

Examples:
```python
# Semantic search (default) - finds related keywords
results = client.keywords.search(query="running shoes")

# Exact match - finds only "running shoes" exactly
results = client.keywords.search(query="running shoes", match_type="exact")

# Substring match - finds keywords containing "running"
results = client.keywords.search(query="running", match_type="contains")
```

#### client.keywords.list_clusters()

List all keyword clusters with representative keywords.

Keywords are grouped into clusters using HDBSCAN clustering based on semantic similarity.
This method returns a summary of all clusters for topic exploration.

Signature:
```python
clusters = client.keywords.list_clusters(
    representative_count=3  # Keywords per cluster (default: 3, max: 10)
)
```

Returns ClusterListResponse with:
- clusters - List of ClusterSummary objects
- noise_count - Number of unclustered keywords (cluster_id=-1)
- total_keywords - Total keywords in the index
- cluster_count - Number of clusters (excluding noise)

Each ClusterSummary has:
- cluster_id - Unique cluster identifier
- size - Number of keywords in cluster
- representative_keywords - Top keywords by volume

Example:
```python
clusters = client.keywords.list_clusters()
for cluster in clusters.clusters:
    print(f"Cluster {cluster.cluster_id}: {cluster.size} keywords")
    print(f"  Topics: {', '.join(cluster.representative_keywords)}")
```

#### client.keywords.get_cluster_keywords()

Get keywords belonging to a specific cluster.

Use this to explore keywords within a cluster. Supports pagination for large clusters.
Use cluster_id=-1 to get noise (unclustered) keywords.

Signature:
```python
result = client.keywords.get_cluster_keywords(
    cluster_id=0,   # Required: cluster ID (-1 for noise)
    limit=50,       # Max keywords (default: 50, max: 1000)
    offset=0        # Pagination offset (default: 0)
)
```

Returns ClusterKeywordsResponse with:
- keywords - List of ClusterKeywordResult objects
- total - Total keywords in this cluster
- cluster_id - The requested cluster ID
- limit - Limit used
- offset - Offset used

Each ClusterKeywordResult has:
- keyword - The keyword text
- cluster_id - Cluster ID
- volume - Monthly search volume (optional)
- difficulty - Difficulty score 0-100 (optional)
- cpc - Cost per click (optional)
- intent - Search intent (optional)

Example:
```python
# Get top keywords from cluster 0
result = client.keywords.get_cluster_keywords(0, limit=20)
for kw in result.keywords:
    print(f"{kw.keyword}: volume={kw.volume}")

# Get noise keywords (unclustered)
noise = client.keywords.get_cluster_keywords(-1, limit=100)
```

### brand

Your brand profile including company info, competitors, industries, and target markets.

#### client.brand.get_profile()

Get the complete brand profile including all company information, competitors, target industries, business models, and geographic scope.

Signature:
```python
profile = client.brand.get_profile()
```

Returns BrandProfileResponse with:
- domain - Domain name
- name - Brand name
- context - Brand context/description
- search_analysis_context - Search analysis context
- domain_authority - Domain authority score
- competitors - List of competitor domains
- industries - List of industries
- business_models - List of business models
- company_size - Company size
- target_customer_segments - Target segments
- geographies - Target geographies
- personas - List of persona strings
- topics - List of Topic objects (value, pillar_page_url, landing_page_url)
- sitemaps - Sitemap URLs
- profile_urls - Profile URLs

Example:
```python
profile = client.brand.get_profile()
print(f"Domain: {profile.domain}")
print(f"Personas: {profile.personas}")
print(f"Topics: {[(t.value, t.pillar_page_url) for t in profile.topics]}")
```

#### client.brand.update_profile()

Update brand profile fields.

**Important behavior:**
- Only provided fields are updated
- Fields set to None/undefined are IGNORED (not cleared)
- To clear a list field, pass an empty list []
- To clear a string field, pass an empty string ""

Signature:
```python
result = client.brand.update_profile(
    name="Project Name",                          # Optional: Project name
    context="Business description...",            # Optional: Business context
    competitors=["competitor1.com"],              # Optional: Competitor domains
    industries=["SaaS", "Technology"],            # Optional: Industries
    business_models=["B2B", "Enterprise"],        # Optional: Business models
    company_size="startup",                       # Optional: solo, early_startup, startup, smb, mid_market, enterprise
    target_customer_segments=["Enterprise"],      # Optional: Target segments
    geographies=["US", "EU"],                     # Optional: Geographic regions
    personas=["Enterprise buyer", "Tech lead"],   # Optional: Persona descriptions
    topics=["SEO", "Content Marketing"]           # Optional: Topic values
)
```

Returns BrandProfileUpdateResponse with:
- success - Boolean indicating success
- profile - Updated BrandProfileResponse with all fields

Examples:
```python
# Update multiple fields at once
result = client.brand.update_profile(
    context="We are a B2B SaaS company...",
    competitors=["competitor1.com", "competitor2.com"],
    personas=["Enterprise buyers", "Technical decision makers"],
    topics=["SEO", "Content Marketing", "Analytics"]
)

# Only update one field (others unchanged)
result = client.brand.update_profile(industries=["SaaS", "MarTech"])

# Clear a list field
result = client.brand.update_profile(competitors=[])
```

### google_cse

Simple, fast Google web searches.

#### client.google_cse.search()

Search Google.

Signature:
```python
results = client.google_cse.search(
    query="best seo tools",
    max_results=10,      # Max results (1-10, default: 10)
    country="us",        # Optional: ISO 3166-1 alpha-2 country code
    language="lang_en"   # Optional: ISO 639-1 with "lang_" prefix
)
```

Returns GoogleCSEResponse with:
- query - The search query
- results - List of GoogleCSEResult objects
- total - Total results count
Each result has: title, url, snippet
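When search results need to land in a report, the three result fields render naturally as a markdown list. `to_markdown` is a hypothetical helper (not part of the SDK), demonstrated on a stub result:

```python
from types import SimpleNamespace


def to_markdown(results):
    """Render result objects exposing title/url/snippet as markdown bullets."""
    return "\n".join(f"- [{r.title}]({r.url}): {r.snippet}" for r in results)


demo = [SimpleNamespace(title="Agent Berlin", url="https://agentberlin.ai",
                        snippet="AI-powered SEO")]
md = to_markdown(demo)
```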

### serpapi

Multi-engine search supporting Google and Bing with News, Videos, Images, and Shopping.

#### client.serpapi.search()

Multi-engine search.

Signature:
```python
results = client.serpapi.search(
    query="best seo tools",      # Required: The search query
    engine="google",             # "google" or "bing" (default: "google")
    search_type="web",           # "web", "news", "images", "videos", "shopping" (default: "web")
    device="desktop",            # Optional: "desktop", "tablet", or "mobile"
    country="us",                # Optional: ISO 3166-1 alpha-2 country code
    language="en",               # Optional: ISO 639-1 language code
    max_results=10               # Number of results, 1-100 (default: 10)
)
```

Note: news, images, videos, shopping search types are Google only.

Examples:
```python
# Bing web search
results = client.serpapi.search(query="best seo tools", engine="bing", country="us")

# Google News search
results = client.serpapi.search(query="AI technology", search_type="news", country="us")

# Google Images search
results = client.serpapi.search(query="modern website design", search_type="images")

# Google Shopping search
results = client.serpapi.search(query="wireless headphones", search_type="shopping")
```

Returns SerpApiResponse with:
- query - The search query
- engine - Search engine used
- search_type - Type of search performed
- results - List of SerpApiResult objects
- total - Total results count
- search_metadata - Optional metadata (id, status, total_time_taken)
Each result has: title, url, snippet, displayed_link, date, thumbnail

### files

Upload files to cloud storage (auto-deleted after 365 days).

#### client.files.upload()

Upload files to cloud storage.

Signature:
```python
# Upload from string content (must encode to bytes)
csv_content = "url,title\nhttps://example.com,Example"
result = client.files.upload(
    file_data=csv_content.encode(),  # Must be bytes, not string
    filename="report.csv"
)

# Upload from file path
result = client.files.upload(file_path="/path/to/file.csv")

# Upload with explicit content type
result = client.files.upload(
    file_data=b'{"key": "value"}',
    filename="data.json",
    content_type="application/json"
)
```

Allowed content types: text/plain, text/csv, text/markdown, text/html, text/css, text/javascript, application/json, application/xml

Returns FileUploadResponse with:
- file_id - Unique file identifier
- filename - The filename
- content_type - MIME type
- size - File size in bytes
- url - Download URL
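When you want to pass `content_type` explicitly, the standard library's `mimetypes` module can guess one from the filename. A sketch: `ALLOWED` mirrors the allow-list above, and `guess_allowed_type` is a hypothetical helper, not part of the SDK:

```python
import mimetypes

ALLOWED = {
    "text/plain", "text/csv", "text/markdown", "text/html", "text/css",
    "text/javascript", "application/json", "application/xml",
}


def guess_allowed_type(filename: str, default: str = "text/plain") -> str:
    """Guess a MIME type from the filename, falling back to `default`
    when the guess is missing or not in the upload allow-list."""
    guessed, _ = mimetypes.guess_type(filename)
    return guessed if guessed in ALLOWED else default


ctype = guess_allowed_type("data.json")
```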

### gsc

Access your Google Search Console data including search analytics, sitemaps, and URL inspection.

#### client.gsc.query()

Query search analytics data.

Signature:
```python
result = client.gsc.query(
    start_date="2024-01-01",
    end_date="2024-01-31",
    dimensions=["query", "page"],  # Optional: "query", "page", "country", "device", "searchAppearance", "date"
    search_type="web",             # Optional: "web", "image", "video", "news", "discover", "googleNews"
    row_limit=100,                 # Optional: max rows (default 1000, max 25000)
    start_row=0,                   # Optional: pagination offset
    aggregation_type="auto",       # Optional: "auto", "byPage", "byProperty"
    data_state="final"             # Optional: "final", "all"
)
```

Returns SearchAnalyticsResponse with:
- rows - List of SearchAnalyticsRow objects
- response_aggregation_type - How data was aggregated
Each row has: keys (list), clicks, impressions, ctr, position

#### Pagination Example

The Search Console API returns max 25,000 rows per request. For large datasets:

```python
def get_all_search_analytics(start_date: str, end_date: str, dimensions: list):
    """Fetch all search analytics data with automatic pagination."""
    all_rows = []
    start_row = 0
    row_limit = 25000  # Maximum allowed by the API

    while True:
        result = client.gsc.query(
            start_date=start_date,
            end_date=end_date,
            dimensions=dimensions,
            row_limit=row_limit,
            start_row=start_row,
        )

        all_rows.extend(result.rows)

        # If we got fewer rows than requested, we've reached the end
        if len(result.rows) < row_limit:
            break

        start_row += row_limit

    return all_rows
```

Note: The API has daily quota limits. For large datasets, consider reducing the date range or using fewer dimensions.
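When aggregating fetched rows, overall CTR should be recomputed from clicks and impressions rather than averaged per row, since a plain mean would overweight low-impression rows. `overall_ctr` is a hypothetical helper (not part of the SDK), shown on stub rows:

```python
from types import SimpleNamespace


def overall_ctr(rows):
    """Impression-weighted CTR across SearchAnalyticsRow-like objects."""
    clicks = sum(r.clicks for r in rows)
    impressions = sum(r.impressions for r in rows)
    return clicks / impressions if impressions else 0.0


rows = [SimpleNamespace(clicks=30, impressions=1000),
        SimpleNamespace(clicks=5, impressions=50)]
ctr = overall_ctr(rows)
```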

#### client.gsc.get_site()

Get site information.

Signature:
```python
site = client.gsc.get_site()
```

Returns SiteInfo with:
- site_url - The site URL
- permission_level - Permission level (e.g., "siteOwner")

#### client.gsc.list_sitemaps()

List all sitemaps.

Signature:
```python
sitemaps = client.gsc.list_sitemaps()
```

Returns SitemapListResponse with:
- sitemap - List of Sitemap objects
Each sitemap has: path, last_submitted, is_pending, is_sitemaps_index, last_downloaded, warnings, errors, contents

#### client.gsc.get_sitemap()

Get specific sitemap details.

Signature:
```python
sitemap = client.gsc.get_sitemap("https://example.com/sitemap.xml")
```

Returns Sitemap with full details.

#### client.gsc.inspect_url()

Inspect a URL's index status.

Signature:
```python
inspection = client.gsc.inspect_url(
    url="https://example.com/page",
    language_code="en-US"  # Optional: for localized results
)
```

Returns UrlInspectionResponse with:
- inspection_result.inspection_result_link - Link to GSC
- inspection_result.index_status_result - Index status details
- inspection_result.mobile_usability_result - Mobile usability
- inspection_result.rich_results_result - Rich results info

### reddit

Access live Reddit data (posts, comments, and subreddit info).

#### client.reddit.get_subreddit_posts()

Fetch posts from a subreddit.

Signature:
```python
posts = client.reddit.get_subreddit_posts(
    subreddit="programming",     # Required: subreddit name (without /r/)
    sort="hot",                  # Optional: "hot", "new", "top", "rising" (default: "hot")
    time_filter="week",          # Optional: "hour", "day", "week", "month", "year", "all" (for "top" sort)
    limit=25,                    # Optional: max posts 1-100 (default: 25)
    after="t3_abc123"            # Optional: pagination cursor from previous response
)
```

Returns SubredditPostsResponse with:
- posts - List of RedditPost objects
- after - Pagination cursor for next page (None if no more results)

Each RedditPost has:
- id - Post ID
- title - Post title
- author - Author username
- subreddit - Subreddit name
- score - Net upvotes
- upvote_ratio - Ratio of upvotes (0-1)
- num_comments - Comment count
- created - Creation datetime
- url - Link URL (or permalink for self posts)
- permalink - Reddit permalink
- is_self - True if text post
- selftext - Post body (for self posts)
- domain - Link domain
- nsfw - True if NSFW
- spoiler - True if marked spoiler
- locked - True if comments locked
- stickied - True if pinned
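The `after` cursor above supports a simple pagination loop. `iter_subreddit_posts` is a hypothetical helper (not part of the SDK) that takes any fetch callable, so it is demonstrated against a stub:

```python
from types import SimpleNamespace


def iter_subreddit_posts(fetch):
    """Yield posts across pages by following the `after` cursor.

    `fetch` is a callable such as:
        lambda after: client.reddit.get_subreddit_posts(
            subreddit="programming", limit=100, after=after)
    Stops when the response's `after` is None.
    """
    after = None
    while True:
        resp = fetch(after=after)
        yield from resp.posts
        if resp.after is None:
            break
        after = resp.after


# Stub: two pages of posts, linked by a cursor.
def fake_fetch(after):
    pages = {None: (["p1", "p2"], "t3_x"), "t3_x": (["p3"], None)}
    posts, nxt = pages[after]
    return SimpleNamespace(posts=posts, after=nxt)

all_posts = list(iter_subreddit_posts(fake_fetch))
```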

#### client.reddit.search()

Search Reddit posts.

Signature:
```python
results = client.reddit.search(
    query="python async",        # Required: search query
    subreddit="learnprogramming", # Optional: restrict to subreddit
    sort="relevance",            # Optional: "relevance", "hot", "top", "new", "comments" (default: "relevance")
    time_filter="month",         # Optional: "hour", "day", "week", "month", "year", "all"
    limit=25,                    # Optional: max results 1-100 (default: 25)
    after="t3_abc123"            # Optional: pagination cursor
)
```

Returns RedditSearchResponse with:
- query - The search query
- posts - List of RedditPost objects
- after - Pagination cursor for next page

#### client.reddit.get_post()

Get a single post by ID.

Signature:
```python
post = client.reddit.get_post("abc123")  # Post ID with or without t3_ prefix
```

Returns RedditPost with full post details.

#### client.reddit.get_post_comments()

Get comments on a post.

Signature:
```python
response = client.reddit.get_post_comments(
    post_id="abc123",            # Required: post ID (with or without t3_ prefix)
    sort="best",                 # Optional: "best", "top", "new", "controversial", "old" (default: "best")
    limit=50                     # Optional: max comments 1-500 (default: 50)
)
```

Returns PostCommentsResponse with:
- post - RedditPost object with full post details
- comments - List of RedditComment objects (nested with replies)

Each RedditComment has:
- id - Comment ID
- author - Author username
- body - Comment text
- score - Net upvotes
- created - Creation datetime
- permalink - Reddit permalink
- depth - Nesting depth (0 = top-level)
- is_op - True if author is the post author
- replies - List of nested RedditComment objects
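Because replies nest recursively, a depth-first flatten is often useful before analysis. `flatten_comments` is a hypothetical helper (not part of the SDK) that works on any objects exposing `replies`, demonstrated on stubs:

```python
from types import SimpleNamespace


def flatten_comments(comments):
    """Depth-first flatten of nested RedditComment-like objects."""
    flat = []
    for c in comments:
        flat.append(c)
        flat.extend(flatten_comments(c.replies))
    return flat


def C(body, replies=()):
    """Stub comment with just the fields the helper touches."""
    return SimpleNamespace(body=body, replies=list(replies))


tree = [C("root", [C("child", [C("grandchild")])]), C("sibling")]
bodies = [c.body for c in flatten_comments(tree)]
```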

#### client.reddit.get_subreddit_info()

Get subreddit metadata.

Signature:
```python
info = client.reddit.get_subreddit_info("programming")
```

Returns SubredditInfo with:
- name - Subreddit name (display_name)
- title - Subreddit title
- description - Full description (markdown)
- public_description - Short public description
- subscribers - Subscriber count
- active_users - Currently active users
- created - Creation datetime
- nsfw - True if NSFW
- icon_url - Subreddit icon URL (optional)
- banner_url - Banner image URL (optional)

### bing_webmaster

Access Bing Webmaster Tools data including search analytics, page performance, traffic stats, and crawl information.

#### client.bing_webmaster.get_query_stats()

Get search query statistics.

Returns search query analytics including impressions, clicks,
average position, and CTR for the queries that surfaced your site in Bing results.

Signature:
```python
result = client.bing_webmaster.get_query_stats()
```

Returns QueryStatsResponse with:
- rows - List of QueryStatsRow objects

Each QueryStatsRow has:
- query - The search query
- impressions - Number of impressions
- clicks - Number of clicks
- avg_position - Average position in search results
- avg_ctr - Average click-through rate
- date - Date of the data (optional)
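A typical first pass over these rows is ranking queries by clicks. `top_queries` is a hypothetical helper (not part of the SDK), shown on stub rows:

```python
from types import SimpleNamespace


def top_queries(rows, n=3):
    """Return the n QueryStatsRow-like objects with the most clicks."""
    return sorted(rows, key=lambda r: r.clicks, reverse=True)[:n]


rows = [SimpleNamespace(query=q, clicks=c)
        for q, c in [("seo tools", 40), ("aeo", 5), ("agent berlin", 90)]]
best = top_queries(rows, n=2)
```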

#### client.bing_webmaster.get_page_stats()

Get page-level statistics.

Returns page-level analytics including impressions, clicks,
average position, and CTR for your pages.

Signature:
```python
result = client.bing_webmaster.get_page_stats()
```

Returns PageStatsResponse with:
- rows - List of PageStatsRow objects

Each PageStatsRow has:
- url - The page URL
- impressions - Number of impressions
- clicks - Number of clicks
- avg_position - Average position in search results
- avg_ctr - Average click-through rate

#### client.bing_webmaster.get_traffic_stats()

Get overall traffic statistics.

Returns aggregate traffic metrics for your site.

Signature:
```python
traffic = client.bing_webmaster.get_traffic_stats()
```

Returns TrafficStats with:
- date - Date of the data (optional)
- impressions - Total impressions
- clicks - Total clicks
- avg_ctr - Average click-through rate
- avg_imp_rank - Average impression rank
- avg_click_position - Average click position

#### client.bing_webmaster.get_crawl_stats()

Get crawl statistics.

Returns crawl metrics including pages crawled, errors,
indexing status, and various crawl issues.

Signature:
```python
crawl = client.bing_webmaster.get_crawl_stats()
```

Returns CrawlStats with:
- date - Date of the data (optional)
- crawled_pages - Number of pages crawled
- crawl_errors - Number of crawl errors
- in_index - Number of pages in the index
- in_links - Number of inbound links
- blocked_by_robots_txt - Pages blocked by robots.txt
- contains_malware - Pages flagged for malware
- http_code_error - Pages with HTTP errors

#### client.bing_webmaster.get_site()

Get site information.

Returns the site URL and verification status for the site
associated with the current project.

Signature:
```python
site = client.bing_webmaster.get_site()
```

Returns SiteInfo with:
- site_url - The site URL
- is_verified - Whether the site is verified
- is_in_index - Whether the site is in the Bing index
