Pipecat welcomes community-maintained integrations! As our ecosystem grows, we've established a process for any developer to create and maintain their own service integrations while ensuring discoverability for the Pipecat community.
What we support: Community-maintained integrations that live in separate repositories and are maintained by their authors.
What we don't do: The Pipecat team does not code review, test, or maintain community integrations. We provide guidance and list approved integrations for discoverability.
Why this approach: This allows the community to move quickly while keeping the Pipecat core team focused on maintaining the framework itself.
To be listed as an official community integration, follow these steps:
Create your integration following the patterns and examples shown in the "Integration Patterns and Examples" section below.
Your repository must contain these components:
- **Source code** - Complete implementation following Pipecat patterns
- **Foundational example** - Single-file example showing basic usage (see Pipecat examples)
- **README.md** - Must include:
  - Introduction and explanation of your integration
  - Installation instructions
  - Usage instructions with a Pipecat Pipeline
  - How to run your example
  - Pipecat version compatibility (e.g., "Tested with Pipecat v0.0.86")
  - Company attribution: if you work for the company providing the service, please mention this in your README. This helps build confidence that the integration will be actively maintained.
- **LICENSE** - Permissive license (BSD-2, like Pipecat, or equivalent open-source terms)
- **Code documentation** - Source code with docstrings (we recommend following Pipecat's docstring conventions)
- **Changelog** - Maintain a changelog for version updates
Join our Discord: https://discord.gg/pipecat
Submit a pull request to add your integration to our Community Integrations documentation page.
To submit:
- Fork the Pipecat docs repository
- Edit the file `server/services/community-integrations.mdx` and add your integration to the appropriate service category table with:
  - Service name
  - Link to your repository
  - Maintainer GitHub username(s)
- Include a link to your demo video (approx. 30-60 seconds) in your PR description showing:
  - Core functionality of your integration
  - Handling of an interruption (if applicable to the service type)
- Submit your pull request
Once your PR is submitted, post in the #community-integrations Discord channel to let us know.
Base class: WebsocketSTTService
Use for: Services where you manage the websocket connection directly. Combines STTService with WebsocketService for automatic reconnection and keepalive support.
Examples:
Base class: STTService
Use for: Streaming services where the provider's Python SDK manages the connection internally.
Examples:
Base class: SegmentedSTTService
Examples:
- STT services should push `InterimTranscriptionFrame`s and `TranscriptionFrame`s
- If confidence values are available, filter for values >50% confidence
Base class: OpenAILLMService
Examples:
- AzureLLMService
- GrokLLMService - Shows overriding the base class where needed
Requires: Full implementation
Examples:
- `_process_context(self, context: LLMContext)` - The main method that processes an LLM context and generates a response. Each LLM service overrides `process_frame` to extract context from `LLMContextFrame` and calls `_process_context`.
- `adapter_class` - Class attribute pointing to a `BaseLLMAdapter` subclass. Defaults to `OpenAILLMAdapter`. Non-OpenAI services must implement their own adapter (see `src/pipecat/adapters/base_llm_adapter.py`) with methods:
  - `get_llm_invocation_params(context)` - Extract provider-specific params from the universal context
  - `to_provider_tools_format(tools_schema)` - Convert standard tools to the provider format
  - `get_messages_for_logging(context)` - Format messages for logging
  - Reference adapters: `src/pipecat/adapters/services/` (anthropic, gemini, bedrock, etc.)
- Frame sequence: Output must follow this frame sequence pattern:
  - `LLMFullResponseStartFrame` - Signals the start of an LLM response
  - `LLMTextFrame` - Contains LLM content, typically streamed as tokens
  - `LLMFullResponseEndFrame` - Signals the end of an LLM response
- Thought frames (reasoning models): If the model supports extended thinking / chain-of-thought, emit thought frames alongside the response:
  - `LLMThoughtStartFrame` - Signals the start of a thought
  - `LLMThoughtTextFrame` - Contains thought content, streamed as tokens
  - `LLMThoughtEndFrame` - Signals the end of a thought
- Context aggregation is handled by the framework via `LLMContext` + `LLMContextAggregatorPair`. The LLM service just processes the context it receives - no need to implement aggregators.
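The required start/text/end frame sequence can be sketched with simplified stand-in frame classes (these are illustrative stubs, not the real Pipecat imports):

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncGenerator


# Simplified stand-ins for Pipecat's frame types.
@dataclass
class LLMFullResponseStartFrame: ...


@dataclass
class LLMTextFrame:
    text: str


@dataclass
class LLMFullResponseEndFrame: ...


async def fake_provider_stream() -> AsyncGenerator[str, None]:
    """Pretend token stream from a provider SDK."""
    for token in ["Hello", ", ", "world", "!"]:
        yield token


async def process_context() -> list:
    """Wrap a streamed response in the required start/text/end frames."""
    frames: list = [LLMFullResponseStartFrame()]
    async for token in fake_provider_stream():
        frames.append(LLMTextFrame(text=token))
    frames.append(LLMFullResponseEndFrame())
    return frames


frames = asyncio.run(process_context())
print([type(f).__name__ for f in frames])
# ['LLMFullResponseStartFrame', 'LLMTextFrame', 'LLMTextFrame',
#  'LLMTextFrame', 'LLMTextFrame', 'LLMFullResponseEndFrame']
```

In a real service, each frame would be pushed downstream with `push_frame` rather than collected in a list.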
Use for: Websocket-based streaming services (with or without word timestamps)
Examples:
Use for: Websocket-based services without word timestamps that reconnect on interruption (e.g., providers that don't support a context ID or interruption message)
Example:
Use for: HTTP-based services (word timestamps are supported in the base class)
Examples:
- For websocket services, use an asyncio WebSocket implementation
- Handle idle service timeouts with keepalives
- TTS services push both audio (`TTSAudioRawFrame`) and text (`TTSTextFrame`) frames
Pipecat supports telephony provider integration using websocket connections to exchange MediaStreams. These services use a FrameSerializer to serialize and deserialize inputs from the FastAPIWebsocketTransport.
Examples:
- Include hang-up functionality using the provider's native API, ideally using aiohttp
- Support DTMF (dual-tone multi-frequency) events if the provider supports them:
  - Deserialize DTMF events from the provider's protocol to `InputDTMFFrame`
  - Use the `KeypadEntry` enum for valid keypad entries (0-9, *, #, A-D)
  - Handle invalid DTMF digits gracefully by returning `None`
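A standalone sketch of the DTMF deserialization guideline (the enum below is a trimmed stand-in for Pipecat's `KeypadEntry`, shown only to illustrate the valid/invalid handling):

```python
from enum import Enum
from typing import Optional


class KeypadEntry(Enum):
    """Simplified stand-in for Pipecat's KeypadEntry enum."""

    ONE = "1"
    TWO = "2"
    THREE = "3"
    STAR = "*"
    POUND = "#"
    # Remaining digits and A-D omitted for brevity.


def deserialize_dtmf(digit: str) -> Optional[KeypadEntry]:
    """Map a provider DTMF digit to a keypad entry; None for invalid digits."""
    try:
        return KeypadEntry(digit)
    except ValueError:
        return None  # invalid digit: drop gracefully rather than raise


print(deserialize_dtmf("1"))  # KeypadEntry.ONE
print(deserialize_dtmf("z"))  # None
```

A real serializer would wrap the returned entry in an `InputDTMFFrame` before pushing it into the pipeline.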
Base class: ImageGenService
Examples:
- Must implement the `run_image_gen` method, returning an `AsyncGenerator`
Vision services process images and provide analysis such as descriptions, object detection, or visual question answering.
Base class: VisionService
Example:
- Must implement a `run_vision` method that takes a `UserImageRawFrame` and returns an `AsyncGenerator[Frame, None]`
- The method processes the image frame and yields frames with analysis results
- Must yield the frame sequence: `VisionFullResponseStartFrame`, `VisionTextFrame`, `VisionFullResponseEndFrame`
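The vision frame sequence can be sketched with stand-in frame classes (illustrative stubs only, not the real Pipecat types; a real service would call the provider's vision API where noted):

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncGenerator


# Simplified stand-ins for Pipecat's frame types.
@dataclass
class UserImageRawFrame:
    image: bytes


@dataclass
class VisionFullResponseStartFrame: ...


@dataclass
class VisionTextFrame:
    text: str


@dataclass
class VisionFullResponseEndFrame: ...


async def run_vision(frame: UserImageRawFrame) -> AsyncGenerator[object, None]:
    """Analyze the image and yield the required frame sequence."""
    yield VisionFullResponseStartFrame()
    # A real service would send frame.image to the provider here.
    yield VisionTextFrame(text=f"An image of {len(frame.image)} bytes.")
    yield VisionFullResponseEndFrame()


async def main() -> list:
    return [f async for f in run_vision(UserImageRawFrame(image=b"\x89PNG"))]


frames = asyncio.run(main())
print([type(f).__name__ for f in frames])
```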
Use the pipecat-{vendor} naming convention for your PyPI package and repository:
- `pipecat-{vendor}` - for single-service integrations (e.g., `pipecat-deepdub`)
- `pipecat-{vendor}-{type}` - when a vendor offers multiple service types (e.g., `pipecat-upliftai-stt`, `pipecat-upliftai-tts`)
This convention makes community packages easily discoverable via PyPI search and clearly identifies them as part of the Pipecat ecosystem.
- STT: `VendorSTTService`
- LLM: `VendorLLMService`
- TTS:
  - Websocket: `VendorTTSService`
  - HTTP: `VendorHttpTTSService`
- Image: `VendorImageGenService`
- Vision: `VendorVisionService`
- Telephony: `VendorFrameSerializer`
Enable metrics in your service:
```python
def can_generate_metrics(self) -> bool:
    """Check if this service can generate processing metrics.

    Returns:
        True, as this service supports metrics.
    """
    return True
```

Every AI service (STT, LLM, TTS, image generation, etc.) exposes a Settings dataclass that serves two roles:

- **Store mode** - the service's `self._settings` holds the current value of every runtime-updatable field.
- **Delta mode** - an update frame (e.g. `TTSUpdateSettingsFrame`) specifies only the fields that should change; unspecified fields remain `NOT_GIVEN`.
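To make store vs. delta mode concrete, here is a minimal standalone re-implementation of the sentinel and merge logic (the real `NOT_GIVEN` and `apply_update` live in `pipecat.services.settings`; this sketch only mirrors their behavior):

```python
from dataclasses import dataclass, field, fields


class _NotGiven:
    """Sentinel marking a field that an update did not specify."""

    def __repr__(self) -> str:
        return "NOT_GIVEN"


NOT_GIVEN = _NotGiven()


@dataclass
class TTSSettings:
    voice: object = field(default_factory=lambda: NOT_GIVEN)
    speaking_rate: object = field(default_factory=lambda: NOT_GIVEN)

    def apply_update(self, update: "TTSSettings") -> dict:
        """Merge only the given fields; return {name: previous_value} for changes."""
        changed = {}
        for f in fields(update):
            new = getattr(update, f.name)
            if new is NOT_GIVEN:
                continue  # delta mode: unspecified fields stay as-is
            old = getattr(self, f.name)
            if old != new:
                changed[f.name] = old
                setattr(self, f.name, new)
        return changed


# Store mode: every field holds a real value.
settings = TTSSettings(voice="default-voice", speaking_rate=1.0)
# Delta mode: the update names only the fields to change.
changed = settings.apply_update(TTSSettings(speaking_rate=1.5))
print(changed)         # {'speaking_rate': 1.0}
print(settings.voice)  # default-voice (untouched)
```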
Extend `STTSettings`, `TTSSettings`, `LLMSettings`, or `ImageGenSettings` (or, if your service directly subclasses `AIService`, `ServiceSettings`). The base classes already provide common fields (e.g. `model`, `voice`, `language`). You only need to add service-specific knobs that should be runtime-updatable:
```python
from dataclasses import dataclass, field

from pipecat.services.settings import TTSSettings, NOT_GIVEN


@dataclass
class MyTTSSettings(TTSSettings):
    """Settings for MyTTS service.

    Parameters:
        speaking_rate: Speed multiplier (0.5–2.0).
    """

    speaking_rate: float | None = field(default_factory=lambda: NOT_GIVEN)
```

What goes in Settings vs. `__init__` params:
| Belongs in Settings | Stays as `__init__` params |
|---|---|
| Model name, voice, language | API keys, auth tokens |
| Service-specific tuning knobs (rate, pitch, temperature) | Base URLs, endpoint overrides |
| Anything users may want to change mid-session | Audio encoding, sample format |
| | Connection parameters (timeouts, retries) |
The rule of thumb: if a caller might send an update frame to change it at runtime, it belongs in Settings. Everything else is init-only config stored as self._xxx.
Accept an optional settings parameter. Build a default_settings object with all fields set to real values, then merge any caller overrides with apply_update.
Add a Settings class attribute that points to your settings dataclass. This lets callers access the settings class through the service itself (e.g. MyTTSService.Settings(...)) without a separate import:
```python
from typing import Optional


class MyTTSService(TTSService):
    Settings = MyTTSSettings
    _settings: Settings

    def __init__(
        self,
        *,
        api_key: str,
        settings: Optional[Settings] = None,
        **kwargs,
    ):
        # 1. Defaults — every field has a real value (store mode).
        default_settings = self.Settings(
            model="my-model-v1",
            voice="default-voice",
            language="en",
            speaking_rate=1.0,
        )
        # 2. Merge caller overrides (only given fields win).
        if settings is not None:
            default_settings.apply_update(settings)
        # 3. Pass the fully-populated settings to the base class.
        super().__init__(settings=default_settings, **kwargs)
        # 4. Init-only config stored separately.
        self._api_key = api_key
```

This pattern lets callers override only what they care about:
```python
# Uses all defaults
svc = MyTTSService(api_key="sk-xxx")

# Overrides just the voice — access Settings through the service class
svc = MyTTSService(
    api_key="sk-xxx",
    settings=MyTTSService.Settings(voice="custom-voice"),
)
```

AI services support runtime configuration changes via `*UpdateSettingsFrame`s (e.g. `STTUpdateSettingsFrame`, `TTSUpdateSettingsFrame`, `LLMUpdateSettingsFrame`).
To react to runtime setting changes, override _update_settings. The base implementation applies the delta to self._settings and returns a dict mapping each changed field name to its pre-update value. Your override should call super() first, then act on the changed fields. A common implementation might look like:
```python
async def _update_settings(self, update: TTSSettings) -> dict[str, Any]:
    """Apply a settings update, reconfiguring the connection if needed."""
    changed = await super()._update_settings(update)
    if not changed:
        return changed
    await self._disconnect()
    await self._connect()
    return changed
```

The dict keys work like a set for membership tests (`"language" in changed`) and truthiness (`if changed`). Use `changed.keys() - {"language"}` for set difference, or `changed["language"]` to inspect the previous value of a field.
Note that, in this example, the service requires a reconnect to apply the new language. Consider, for each setting, whether your service requires reconnection or can apply changes in-place.
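To make the changed-dict semantics concrete, here is a standalone illustration (the field names and values are hypothetical):

```python
# The dict returned by _update_settings maps each changed field name
# to its pre-update value.
changed = {"language": "en", "speaking_rate": 1.0}

print("language" in changed)          # True  (membership test)
print(bool(changed))                  # True  (truthiness)
print(changed.keys() - {"language"})  # {'speaking_rate'} (set difference)
print(changed["language"])            # en    (previous value)
```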
If your service can't yet apply certain settings at runtime, call self._warn_unhandled_updated_settings(changed) with any unhandled field names so users get a clear log message:
```python
async def _update_settings(self, update: TTSSettings) -> dict[str, Any]:
    changed = await super()._update_settings(update)
    if not changed:
        return changed
    if "language" in changed:
        await self._update_language()
    else:
        # TODO: this should be temporary - handle changes to other settings soon!
        self._warn_unhandled_updated_settings(changed.keys() - {"language"})
    return changed
```

Sample rates are set via `PipelineParams` and passed to each frame processor at initialization. The pattern is to not set the sample rate value in the constructor of a given service. Instead, use the `start()` method to initialize sample rates from the frame:
```python
async def start(self, frame: StartFrame):
    """Start the service."""
    await super().start(frame)
    self._settings.output_sample_rate = self.sample_rate
    await self._connect()
```

Note that `self.sample_rate` is a `@property` set in the `TTSService` base class, which provides access to the private sample rate value obtained from the `StartFrame`.
Use Pipecat's tracing decorators:
- STT: `@traced_stt` - decorate `_handle_transcription(self, transcript, is_final, language)` (the standard method name convention)
- LLM: `@traced_llm` - decorate the `_process_context()` method
- TTS: `@traced_tts` - decorate the `run_tts()` method
- Name your package `pipecat-{vendor}` (see Naming Conventions)
- Use uv for packaging (encouraged)
- Publish to PyPI for easier installation
- Follow semantic versioning principles
- Maintain a changelog
For REST-based communication, use aiohttp. Pipecat includes this as a required dependency, so using it prevents adding an additional dependency to your integration.
- Wrap API calls in appropriate try/except blocks
- Handle rate limits and network failures gracefully
- Provide meaningful error messages
- When errors occur, raise exceptions AND push errors to notify the pipeline:
```python
try:
    # Your API call
    result = await self._make_api_call()
except Exception as e:
    # Push error upstream to notify the pipeline
    await self.push_error(f"{self} error: {e}", exception=e)
    # Raise or handle as appropriate
    raise
```

- Your foundational example serves as a valuable integration-level test
- Unit tests are nice to have. As the Pipecat team provides better guidance, we will encourage unit testing more
Community integrations are community-maintained and not officially supported by the Pipecat team. Users should evaluate these integrations independently. The Pipecat team reserves the right to remove listings that become unmaintained or problematic.
Pipecat evolves rapidly to support the latest AI technologies and patterns. While we strive to minimize breaking changes, they do occur as the framework matures.
We strongly recommend:
- Join our Discord at https://discord.gg/pipecat and monitor the `#announcements` channel for release notifications
- Follow our changelog: https://github.com/pipecat-ai/pipecat/blob/main/CHANGELOG.md
- Test your integration against new Pipecat releases promptly
- Update your README with the last tested Pipecat version
This helps ensure your integration remains compatible and your users have clear expectations about version support.
Join our Discord community at https://discord.gg/pipecat and post in the #community-integrations channel for guidance and support.
For additional questions, you can also reach out to us at pipecat-ai@daily.co.