Ylem documentation
  • 🗒️General information
    • Introduction to Ylem
    • Quick start guide
    • Release notes
  • 🔬Open-source edition
    • Installation
    • Usage of Apache Kafka
    • Task processing architecture
    • Configuring integrations with .env variables
  • 💡Integrations
    • Connecting an integration
    • Library of integrations
      • Amazon Redshift
      • Apache Kafka
      • APIs
      • Atlassian Jira
      • AWS Lambda
      • AWS RDS
      • AWS S3
      • ClickHouse
      • ElasticSearch
      • E-mail
      • Google Big Query
      • Google Cloud SQL
      • Google Pub/Sub
      • Google Sheets
      • Immuta
      • Incident.io
      • Jenkins
      • Hubspot
      • Microsoft Azure SQL
      • MySQL
      • OpenAI ChatGPT
      • Opsgenie
      • PostgreSQL
      • PlanetScale
      • RabbitMQ
      • Salesforce
      • Slack
      • Snowflake
      • Tableau
      • Twilio. SMS
      • WhatsApp (through Twilio)
    • Initial demo data source
  • 🚡Pipelines
    • Pipeline management
    • Tasks
      • Aggregator
      • API Call
      • Code
      • Condition
      • External trigger
      • Filter
      • For each
      • GPT
      • Merge
      • Notification
      • Query
      • Pipeline runner
      • Processor
      • Transformer
    • Running and scheduling pipelines
    • Library of templates
    • Environment variables
    • Mathematical functions and operations
    • Formatting of messages
  • 📈Statistics and profiling
    • Statistics of runs
    • Slow tasks
  • 📊Metrics
    • Metric management
    • Using previous values of a metric
  • 💼Use cases, patterns, templates, examples
    • Use cases
    • Messaging patterns
      • Datatype Channel
      • Message Dispatcher
      • Messaging Bridge
      • Message Bus
      • Message Filter
      • Message Router
      • Point-to-Point Channel
      • Publish-Subscribe Channel
      • Pull-Push
    • Functional use cases
      • Streaming from Apache Kafka and messaging queues
      • Streaming from APIs
      • Streaming from databases
      • Data orchestration, transformation and processing
      • Usage of Python and Pandas
      • KPI Monitoring
      • OKRs and custom metrics
      • Data Issues & Incidents
      • Reporting
      • Other functional use cases
    • Industry-specific use cases
      • Finance and Payments
      • E-commerce & Logistics
      • Customer Success
      • Security, Risk, and Anti-Fraud
      • Anti-Money Laundering (AML)
  • 🔌API
    • OAuth clients
    • API Reference
  • 👁️‍🗨️Other resources
    • FAQ
    • Our blog on Medium

Publish-Subscribe Channel

Last updated 8 months ago

The main difference between the Point-to-Point and the Publish-Subscribe messaging models is that a message is not sent to a specific receiver but to a specific topic, and all the subscribed receivers receive the message.

If you have a specific message producer (Apache Kafka, RabbitMQ, AWS Lambda, Google Pub/Sub) that delivers a message to a specific Ylem pipeline by calling its API endpoint, this pipeline acts as a topic with distribution logic responsible for forwarding the message to the subscribed pipelines (receivers).
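As an illustration, a producer can deliver a message to such a topic pipeline with a plain HTTP POST to the pipeline's endpoint. This is only a sketch: the URL, path, and bearer-token header below are hypothetical placeholders, not Ylem's actual API.

```python
import json
import urllib.request

def build_trigger_request(endpoint_url: str, token: str, message: dict) -> urllib.request.Request:
    """Build an HTTP POST that delivers a message to a pipeline's trigger endpoint.

    The endpoint URL and Authorization header are illustrative placeholders.
    """
    body = json.dumps(message).encode("utf-8")
    return urllib.request.Request(
        endpoint_url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# Example: a producer publishing a hypothetical "order_created" event to the topic pipeline
req = build_trigger_request(
    "https://example.invalid/pipelines/topic-pipeline/run",  # hypothetical URL
    "TOKEN",
    {"type": "order_created", "order_id": 42},
)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) is then identical for every producer; only the message payload differs.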

The pipeline itself needs to begin with the "External trigger" task and can contain a set of conditions that define which pipelines to forward the message to.

As we can see from this pipeline, there are two subscription condition checks. How you define a condition itself is up to you. We recommend having a standard message-type field in your message schema that is always present in every message, so that the condition can check whether it matches a certain value.

For example:
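A minimal sketch of such a message and condition check, assuming a JSON message schema with a `type` field (the field name and values are our assumptions, not a Ylem requirement):

```python
import json

# A message with a standard "type" field that is present in every message
message = json.loads('{"type": "invoice_paid", "invoice_id": 1001, "amount": 250.0}')

def matches_type(message: dict, expected_type: str) -> bool:
    """Subscription condition: does the message's type match what we expect?"""
    return message.get("type") == expected_type

matches_type(message, "invoice_paid")   # True for this message
matches_type(message, "order_created")  # False for this message
```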

Then the conditions are checked: if the first one is true, the message is forwarded to "pipeline 1", which is subscribed to that type of message.

If the second one is true, the message is forwarded to "pipeline 2" and "pipeline 3", which is how you subscribe multiple pipelines to the same message.

As you can see, this pipeline is not aware of what data the message contains and acts purely as an intermediary that distributes messages to the subscribed pipelines, which contain the actual logic. We recommend always keeping it this way.
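The distribution logic described above can be sketched as a simple routing table from message type to subscribed pipelines (the pipeline names and types below are illustrative):

```python
# Hypothetical subscription table: message type -> subscribed pipelines
SUBSCRIPTIONS = {
    "order_created": ["pipeline 1"],
    "invoice_paid": ["pipeline 2", "pipeline 3"],  # multiple subscribers, same message
}

def route(message: dict) -> list[str]:
    """Return the pipelines a message should be forwarded to.

    The topic never inspects the payload itself, only the type field.
    """
    return SUBSCRIPTIONS.get(message.get("type"), [])

route({"type": "invoice_paid", "invoice_id": 7})  # -> ["pipeline 2", "pipeline 3"]
route({"type": "unknown_event"})                  # -> [] (no subscribers, message dropped)
```

Keeping the table free of any payload-specific logic is what makes the topic reusable: adding a subscriber is one new entry, not a change to the distribution pipeline.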

Or, to make it even more advanced, you can create an environment variable for this message type that can be reused for multiple tasks and pipelines.
