
Adharsh worked on enhancing AI observability and instrumentation across multiple agent and LLM workflows in the openllmetry repository. Over six months, he delivered features such as OpenTelemetry-based tracing and metrics for CrewAI, Bedrock, Ollama, Watson, and Google Generative AI integrations. Using Python, he implemented histogram-based metrics, prompt content extraction, and unified trace attribution, enabling detailed monitoring of token usage, operation durations, and prompt fidelity. His approach focused on backend development and metrics instrumentation, establishing end-to-end visibility and data-driven performance insights. The work provided a robust foundation for incident diagnosis, SLA monitoring, and cost analysis in production AI systems.
Month: 2026-01. Key features delivered: Google Generative AI Instrumentation Metrics added to traceloop/openllmetry, enabling token usage tracking and operation duration monitoring with input/output token histograms and duration histograms. Major bugs fixed: None reported this month. Overall impact: Improved observability and performance monitoring for Google GenAI workloads, enabling better cost visibility and faster issue diagnosis. Technologies/skills demonstrated: instrumentation design, metrics collection, histogram-based observability, PR-driven delivery, and strong traceability.
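The token and duration histograms mentioned above rely on explicit-bucket aggregation. A minimal sketch of that mechanism in plain Python (bucket boundaries and the upper-inclusive rule follow the OpenTelemetry data model; the boundary values here are illustrative, not openllmetry's actual defaults):

```python
import bisect

def make_histogram(boundaries):
    """Explicit-bucket histogram: one bucket per boundary plus an overflow
    bucket, with a running sum and count, mirroring what an OpenTelemetry
    histogram instrument aggregates before export."""
    return {"boundaries": list(boundaries),
            "counts": [0] * (len(boundaries) + 1),
            "sum": 0.0, "count": 0}

def record(hist, value):
    # Bucket i holds values v with boundaries[i-1] < v <= boundaries[i]
    # (upper-inclusive, as in the OpenTelemetry metrics data model);
    # values above the last boundary land in the overflow bucket.
    idx = bisect.bisect_left(hist["boundaries"], value)
    hist["counts"][idx] += 1
    hist["sum"] += value
    hist["count"] += 1

# Recording per-request token counts against illustrative boundaries:
token_hist = make_histogram([1, 4, 16, 64])
for tokens in (3, 4, 100):
    record(token_hist, tokens)
```

In the real instrumentation, `meter.create_histogram(...)` from the OpenTelemetry SDK performs this aggregation; the sketch only shows why histograms capture both cost totals (sum/count) and latency or token distributions (per-bucket counts) in one instrument.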
Month: 2025-10 — Traceloop/openllmetry. Focused on enhancing observability for Watson instrumentation and ensuring reliable prompt data capture across spans. Delivered targeted instrumentation improvements and a refactor to improve accuracy of prompt details extracted from LLM inputs.
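Reliable prompt capture of the kind described above typically means flattening chat messages into indexed span attributes, since span attributes cannot nest. A hedged sketch (the `gen_ai.prompt.N.role`/`.content` key scheme is assumed for illustration; the structured-content handling is a simplification of what real SDK payloads require):

```python
def prompt_attributes(messages, prefix="gen_ai.prompt"):
    """Flatten a list of chat messages into flat, indexed span attributes.

    Span attributes are a flat key/value bag, so each message's role and
    content get their own indexed keys rather than a nested structure.
    """
    attrs = {}
    for i, msg in enumerate(messages):
        attrs[f"{prefix}.{i}.role"] = msg.get("role", "unknown")
        content = msg.get("content", "")
        # Some SDKs return content as a list of typed blocks rather than a
        # plain string; keep only the text parts so the span stays readable.
        if isinstance(content, list):
            content = " ".join(part.get("text", "")
                               for part in content if isinstance(part, dict))
        attrs[f"{prefix}.{i}.content"] = content
    return attrs
```

Handling both the plain-string and structured-block content shapes in one place is exactly the kind of accuracy concern a prompt-extraction refactor addresses: a single missed shape silently drops prompt data from spans.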
July 2025: Delivered OpenTelemetry instrumentation for AI agent workflows in traceloop/openllmetry, establishing end-to-end observability for OpenAI Agent runs, CrewAI tool usage, and model interactions. Implemented new Python instrumentation packages/files with test coverage to validate observability across agent workflows. Fixed LLM span attribute logic to ensure accurate trace data and better span fidelity. This work lays the foundation for dashboards, SLA monitoring, and faster incident response, enabling data-driven improvements and reliability across the automation stack.
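Span-attribute fixes of this kind usually come down to guarding against absent values, so spans never carry empty or misleading fields. A minimal sketch with a stand-in span class (the attribute names follow OpenTelemetry's GenAI conventions but are illustrative here, not a claim about the exact fix):

```python
class Span:
    """Minimal stand-in for an OpenTelemetry span: just an attribute bag."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

def set_llm_attributes(span, model=None, prompt_tokens=None,
                       completion_tokens=None):
    """Set LLM span attributes, skipping anything the response didn't report.

    Writing None (or empty) values would degrade span fidelity, so each
    attribute is only set when real data is present.
    """
    candidates = {
        "gen_ai.request.model": model,
        "gen_ai.usage.input_tokens": prompt_tokens,
        "gen_ai.usage.output_tokens": completion_tokens,
    }
    for key, value in candidates.items():
        if value is not None and value != "":
            span.set_attribute(key, value)
```

With this guard, a response that omits completion token counts simply produces a span without that attribute, rather than one claiming zero or null usage.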
March 2025: Delivered unified observability enhancements for Bedrock and Ollama telemetry in openllmetry. Bedrock instrumentation now collects traces and metrics from imported models with improved model ID attribution; Ollama instrumentation adds token usage and operation duration histograms, with metric collection enabled, for detailed performance monitoring.
February 2025 monthly summary for Shubhamsaboo/openllmetry focused on improving LLM observability and operational insight. Delivered instrumentation enhancements for CrewAI and Groq that enable data-driven performance optimization and faster incident response, with a concrete fix to ensure metric accuracy.
January 2025: Delivered CrewAI Observability with OpenTelemetry instrumentation in Shubhamsaboo/openllmetry. Implemented initial tracing across agents, tasks, crews, and LLM calls within CrewAI workflows, establishing end-to-end visibility, enabling faster incident diagnosis, and laying groundwork for metrics/logs integration.
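End-to-end visibility across agents, tasks, crews, and LLM calls comes from nesting spans so each unit of work records its parent. A toy tracer illustrating that hierarchy (a sketch of the pattern only; the real work uses the OpenTelemetry tracer, and the span names here are hypothetical):

```python
from contextlib import contextmanager

class MiniTracer:
    """Toy tracer showing the nested span structure crew -> task -> llm.

    Finished spans are recorded as (name, parent_name) pairs, which is
    enough to reconstruct the workflow tree a real trace backend renders.
    """
    def __init__(self):
        self.finished = []
        self._stack = []

    @contextmanager
    def span(self, name):
        # The currently open span (if any) becomes this span's parent.
        parent = self._stack[-1] if self._stack else None
        self._stack.append(name)
        try:
            yield name
        finally:
            self._stack.pop()
            self.finished.append((name, parent))

tracer = MiniTracer()
with tracer.span("crew"):
    with tracer.span("task"):
        with tracer.span("llm"):
            pass
```

Spans finish innermost-first, so the LLM call closes before its task, and the task before its crew; the parent links are what let a trace viewer attribute a slow LLM call to the specific task and crew that issued it.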
