Metadata-Version: 2.4
Name: otel-utils
Version: 1.2.0
Summary: Simplified utilities for OpenTelemetry instrumentation
License: Proprietary
Author: Harold Portocarrero
Author-email: harold@getcometa.com
Requires-Python: >=3.8
Classifier: License :: Other/Proprietary License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Dist: opentelemetry-api (>=1.33.0)
Requires-Dist: opentelemetry-exporter-otlp (>=1.33.0)
Requires-Dist: opentelemetry-instrumentation (>=0.54b0)
Requires-Dist: opentelemetry-instrumentation-logging (>=0.54b0)
Requires-Dist: opentelemetry-sdk (>=1.33.0)
Requires-Dist: typing-extensions (>=4.12.2)
Project-URL: Repository, https://github.com/getcometa/otel-utils
Description-Content-Type: text/markdown

# OpenTelemetry Utils

A Python library designed to simplify application instrumentation using OpenTelemetry. This library provides an abstraction layer that makes instrumentation more intuitive and less intrusive in your business logic.

## Features

- Simplified OpenTelemetry configuration
- Intuitive API for distributed tracing
- Utilities for metrics and structured logging
- OpenTelemetry Collector integration
- Complete context propagation support
- Full compatibility with asynchronous applications

## Installation

```bash
pip install otel-utils
```

## Basic Usage

### Initial Configuration
```python
import logging

from otel_utils import OtelConfig, OtelConfigurator

config = OtelConfig(
    service_name="my-service",
    environment="production",
    otlp_endpoint="http://localhost:4318",  # Optional
    protocol="http",                        # "http" or "grpc", default "grpc"
    trace_sample_rate=1.0,                  # Sampling rate, default 1.0
    metric_export_interval_ms=30000,        # Metrics export interval
    log_level=logging.INFO,                 # Logging level
    enable_console_logging=True,            # Enable console logging
    json_logging=True,                      # Use JSON format for logs (default: True)
    additional_resources={                   # Optional additional resources
        "deployment.region": "us-east-1",
        "team.name": "backend"
    }
)

OtelConfigurator(config)
```

### Tracing
```python
from otel_utils import Tracer

tracer = Tracer("my-service")

# Using the decorator (auto-handles errors and status)
@tracer.trace("my_operation")
async def my_function():
    # Your code here
    pass

# Using the context manager for custom spans
with tracer.create_span("my_operation") as span:
    span.set_attribute("key", "value")
    # Your code here

# Using start_as_current_span for context propagation
with tracer.start_as_current_span("parent_span") as span:
    # Operations here will be associated with parent_span
    pass
```

### Metrics
```python
from otel_utils import Metrics

metrics = Metrics("my-service")

# Simple counter
counter = metrics.get_counter("requests_total")
counter.add(1, {"endpoint": "/api/v1/resource"})

# Histogram for latencies using a context manager
with metrics.measure_duration("request_duration", attributes={"method": "GET"}):
    # Your code here
    pass

# Asynchronous metrics (Observable Gauge)
def get_system_load():
    return 0.5  # Calculated value

metrics.observe_async(
    name="system_load",
    description="Current system load",
    callback=get_system_load
)
```

### Structured Logging
The library provides a `StructuredLogger` that outputs logs in JSON format (when `json_logging` is enabled) and automatically includes tracing context.

```python
from otel_utils import StructuredLogger

logger = StructuredLogger("my-service", environment="production")

# Using operation_context to auto-log start/end/errors
with logger.operation_context("process_order", order_id="123", customer_type="vip"):
    logger.info("Starting processing")
    # If an exception is raised here, it will be logged with status="failed"
    # and the error message will be included in the JSON log.

# Manual logging with extra context
logger.info("Event occurred", action="login", user="user123")
```

## OpenTelemetry Collector Integration

This library is designed to work seamlessly with the OpenTelemetry Collector. Telemetry is exported over OTLP, OpenTelemetry's native protocol, so any Collector with an OTLP receiver can ingest it.

### Collector Configuration with HTTP
```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # configure your exporters here

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [your-exporter]
```

### Collector Configuration with gRPC
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  # configure your exporters here

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [your-exporter]
```

## Best Practices

### Separation of Concerns
Keep instrumentation separate from business logic by creating domain-specific abstractions. Your business code should remain clean and focused on its primary responsibilities.
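For instance, you can hide the tracer behind a domain-specific facade so business code never manipulates spans directly (a minimal sketch; `OrderTelemetry` and its method names are illustrative, not part of this library — it works with any tracer exposing a `create_span(name)` context manager, like the `Tracer` shown above):

```python
from contextlib import contextmanager

class OrderTelemetry:
    """Domain-specific facade: business code talks orders, not spans."""

    def __init__(self, tracer):
        self._tracer = tracer

    @contextmanager
    def processing(self, order_id: str):
        # Span names and attributes live here, in one place.
        with self._tracer.create_span("order.process") as span:
            span.set_attribute("order.id", order_id)
            yield

def process_order(telemetry: OrderTelemetry, order_id: str):
    # Business logic stays free of OpenTelemetry details.
    with telemetry.processing(order_id):
        pass  # domain logic only
```

If the instrumentation backend ever changes, only the facade needs updating.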

### Consistent Naming
Use coherent naming conventions for spans, metrics, and logs across your services. This makes it easier to correlate and analyze telemetry data.
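One lightweight way to enforce this is a tiny helper that builds every span, metric, and log name from a fixed `<domain>.<entity>.<action>` pattern (the convention and helper below are assumptions for illustration, not part of this library):

```python
def telemetry_name(domain: str, entity: str, action: str) -> str:
    """Build a telemetry name following one shared convention.

    A single helper used across services keeps names uniform,
    e.g. 'orders.payment.capture' instead of ad-hoc variants.
    """
    parts = (domain, entity, action)
    if not all(p.isidentifier() for p in parts):
        raise ValueError(f"invalid telemetry name parts: {parts}")
    return ".".join(p.lower() for p in parts)

telemetry_name("orders", "payment", "capture")  # "orders.payment.capture"
```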

### Relevant Context
Include useful contextual information in spans and logs, but be mindful of sensitive data. Focus on information that aids debugging and monitoring.
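A simple guard is to run attributes through a small redaction helper before attaching them to spans or logs (a sketch; the key names are examples of what a team might treat as sensitive):

```python
SENSITIVE_KEYS = {"password", "token", "authorization", "credit_card"}

def safe_attributes(attrs: dict) -> dict:
    """Mask known-sensitive keys before adding context to telemetry."""
    return {
        key: ("[REDACTED]" if key.lower() in SENSITIVE_KEYS else value)
        for key, value in attrs.items()
    }

safe_attributes({"user": "u1", "token": "abc"})
# {"user": "u1", "token": "[REDACTED]"}
```

The redacted dict can then be passed to `span.set_attribute` calls or to the structured logger's keyword arguments.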

### Appropriate Granularity
Don't instrument everything. Focus on significant operations that provide value for monitoring and debugging. Consider the overhead and noise ratio when adding instrumentation.

## Development

To set up the development environment:

```bash
# Create virtualenv
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest
```

## Contributing

1. Create a feature branch (`git checkout -b feature/new-feature`)
2. Commit your changes (`git commit -am 'Add new feature'`)
3. Push to the branch (`git push origin feature/new-feature`)
4. Create a Pull Request
