---
description: Learn how aimicromind streaming works
---

# Streaming

If `streaming` is enabled when making a prediction, tokens are sent as data-only server-sent events as they become available.
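
For illustration, the sketch below reads that stream directly over HTTP with Python's `requests` library, without either SDK. The endpoint path and the `question`/`streaming` payload mirror the cURL example further down; the host URL and the line-splitting logic are assumptions for this sketch, not part of the documented API.

```python
# Minimal sketch: consume the prediction stream as raw server-sent events.
# Host and chatflow ID are placeholders; adjust them for your deployment.
import requests

url = "http://localhost:3000/api/v1/predictions/<chatflow-id>"
payload = {"question": "Tell me a joke!", "streaming": True}

with requests.post(url, json=payload, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        # SSE lines look like `event: token` and `data: Once upon a time...`
        if not line:
            continue  # blank lines separate events
        field, _, value = line.partition(":")
        print(field.strip(), value.strip())
```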

## Using Python/TS Library

aimicromind provides two client libraries, Python and TypeScript:

{% tabs %} {% tab title="Python" %}

```python
from aimicromind import AiMicromind, PredictionData


def test_streaming():
    client = AiMicromind()

    # Test streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<chatflow-id>",
            question="Tell me a joke!",
            streaming=True
        )
    )

    # Process and print each streamed chunk
    print("Streaming response:")
    for chunk in completion:
        # {event: "token", data: "hello"}
        print(chunk)


if __name__ == "__main__":
    test_streaming()
```

{% endtab %}

{% tab title="Typescript" %}

```typescript
import { AiMicromindClient } from 'aimicromind-sdk'

async function test_streaming() {
  const client = new AiMicromindClient({ baseUrl: 'http://localhost:3000' });

  try {
    // For streaming prediction
    const prediction = await client.createPrediction({
      chatflowId: '<chatflow-id>',
      question: 'What is the capital of France?',
      streaming: true,
    });

    for await (const chunk of prediction) {
      // {event: "token", data: "hello"}
      console.log(chunk);
    }
  } catch (error) {
    console.error('Error:', error);
  }
}

// Run streaming test
test_streaming()
```

{% endtab %}

{% tab title="cURL" %}

```bash
curl https://localhost:3000/api/v1/predictions/{chatflow-id} \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Hello world!",
    "streaming": true
  }'
```

{% endtab %} {% endtabs %}

Each streamed event arrives in the standard server-sent event format, for example:

```
event: token
data: Once upon a time...
```

A prediction's event stream consists of the following event types:

| Event | Description |
| --- | --- |
| `start` | The start of streaming |
| `token` | Emitted when the prediction is streaming new token output |
| `error` | Emitted when the prediction returns an error |
| `end` | Emitted when the prediction finishes |
| `metadata` | All metadata of the related flow, such as chatId and messageId. Emitted after all tokens have finished streaming and before the `end` event |
| `sourceDocuments` | Emitted when the flow returns sources from the vector store |
| `usedTools` | Emitted when the flow used tools |
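
As a rough illustration of how a client might react to these event types, the sketch below extends the Python SDK example above. The dispatch logic, and the assumption that each chunk is either a dict or a JSON string of the form `{"event": ..., "data": ...}` (as in the inline comments earlier), are illustrative rather than part of the SDK documentation.

```python
import json

from aimicromind import AiMicromind, PredictionData

client = AiMicromind()
completion = client.create_prediction(
    PredictionData(chatflowId="<chatflow-id>", question="Tell me a joke!", streaming=True)
)

answer = ""
for chunk in completion:
    # Assumed chunk shape: {"event": "token", "data": "..."} (dict or JSON string)
    event = chunk if isinstance(chunk, dict) else json.loads(chunk)
    kind, data = event.get("event"), event.get("data")

    if kind == "token":
        answer += data or ""          # accumulate streamed tokens
    elif kind == "metadata":
        print("metadata:", data)      # chatId, messageId, etc., sent before "end"
    elif kind == "sourceDocuments":
        print("sources:", data)       # vector store sources, if any
    elif kind == "usedTools":
        print("tools:", data)         # tools used by the flow, if any
    elif kind == "error":
        raise RuntimeError(data)
    elif kind == "end":
        break                         # streaming finished

print("Full answer:", answer)
```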

## Streamlit App

https://github.com/HenryHengZJ/aimicromind-streamlit
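
The repository above is a complete example. As a rough sketch of the same idea (not the code from that repository), a Streamlit page can forward only the `token` events into `st.write_stream`; the chunk format is the same assumption as in the previous sketch.

```python
# Rough sketch of a Streamlit chat UI over the streaming API (not the linked app).
import json

import streamlit as st
from aimicromind import AiMicromind, PredictionData

client = AiMicromind()


def token_stream(question: str):
    """Yield only the token text from the prediction event stream."""
    completion = client.create_prediction(
        PredictionData(chatflowId="<chatflow-id>", question=question, streaming=True)
    )
    for chunk in completion:
        # Assumed chunk shape: {"event": "token", "data": "..."} (dict or JSON string)
        event = chunk if isinstance(chunk, dict) else json.loads(chunk)
        if event.get("event") == "token":
            yield event.get("data", "")


question = st.chat_input("Ask something")
if question:
    st.chat_message("user").write(question)
    with st.chat_message("assistant"):
        st.write_stream(token_stream(question))
```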