# Text Generation

Use `ChatClient` for AI text generation and conversations.
`ChatClient` is the core class for AI text generation in the PlayKit SDK. It provides flexible methods for text completion, streaming responses, tool calling, and structured output generation.
## Create ChatClient
```typescript
import { PlayKitSDK } from 'playkit-sdk';

const sdk = new PlayKitSDK({
  gameId: 'your-game-id',
  developerToken: 'your-token'
});
await sdk.initialize();

// Use the default model (gpt-4o-mini)
const chat = sdk.createChatClient();

// Or specify a model explicitly
const chatGpt4o = sdk.createChatClient('gpt-4o');
```

## Simple Chat
The simplest way to interact with AI.
### Basic Q&A
```typescript
const response = await chat.chat('What is quantum computing?');
console.log(response);
```

### With System Prompt
```typescript
const response = await chat.chat(
  'How should I handle this challenge?',
  'You are a wise game guide who speaks in riddles.'
);
```

System prompts let you:
- Define the AI's role and personality
- Set response style and format
- Provide background information and rules
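
All three can be combined in one prompt. Here is a minimal sketch using the same `chat.chat` call as above; the prompt wording is just an example:

```typescript
// One system prompt pinning role, style, and output format
const guidePrompt =
  'You are a terse quest guide. ' +
  'Answer in at most two sentences. ' +
  'End every reply with a one-line hint prefixed with "Hint:".';

const answer = await chat.chat('Where do I find the silver key?', guidePrompt);
console.log(answer);
```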
## Streaming Responses
Display AI-generated text in real-time instead of waiting for the complete response.
### Basic Streaming
```typescript
await chat.chatStream(
  'Tell me a long story about a brave knight',
  // onChunk: called for each text fragment
  (chunk) => {
    process.stdout.write(chunk);
  },
  // onComplete: called when finished (optional)
  (fullText) => {
    console.log('\nDone! Total length:', fullText.length);
  }
);
```

### Streaming in Web Pages
```html
<div id="ai-response"></div>
<script type="module">
  // `chat` is the ChatClient created during SDK initialization;
  // top-level await requires a module script
  const responseDiv = document.getElementById('ai-response');
  await chat.chatStream(
    'Tell me about the universe',
    (chunk) => {
      responseDiv.textContent += chunk;
      responseDiv.scrollTop = responseDiv.scrollHeight;
    }
  );
</script>
```

### Streaming with System Prompt
```typescript
await chat.chatStream(
  'Explain gravity',
  (chunk) => display(chunk),
  (fullText) => console.log('Complete'),
  'You are a physics teacher. Keep explanations simple.'
);
```

## Full Configuration Text Generation
For scenarios requiring more control, use the `textGeneration` method.
### Basic Usage
```typescript
const result = await chat.textGeneration({
  messages: [
    { role: 'system', content: 'You are a helpful assistant' },
    { role: 'user', content: 'What is the capital of France?' }
  ]
});

console.log('Response:', result.content);
console.log('Model:', result.model);
console.log('Finish reason:', result.finishReason);
```

### Response Object
```typescript
interface ChatResult {
  content: string;         // Generated text
  model: string;           // Model used
  finishReason: string;    // 'stop' | 'length' | 'content_filter' | 'tool_calls'
  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  id?: string;             // Completion ID
  created?: number;        // Timestamp
  tool_calls?: ToolCall[]; // If tools were called
}
```

### Multi-turn Conversation
Manually manage conversation history:
```typescript
const messages = [
  { role: 'system', content: 'You are a game master' }
];

// Round 1
messages.push({ role: 'user', content: 'What is my quest?' });
const result1 = await chat.textGeneration({ messages });
messages.push({ role: 'assistant', content: result1.content });

// Round 2
messages.push({ role: 'user', content: 'How do I start?' });
const result2 = await chat.textGeneration({ messages });
messages.push({ role: 'assistant', content: result2.content });

// Round 3
messages.push({ role: 'user', content: 'What weapons do I need?' });
const result3 = await chat.textGeneration({ messages });
```

For NPC dialogue with automatic history management, use `NPCClient` instead. See NPC Conversations.
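
If you do manage history by hand, it can help to wrap the push-generate-push cycle in a small helper. This is a sketch of ours, not an SDK API; it only uses the `textGeneration` call documented above:

```typescript
// Hypothetical helper: appends the user turn, generates a reply,
// and records the assistant turn in one call
async function ask(messages, userText) {
  messages.push({ role: 'user', content: userText });
  const result = await chat.textGeneration({ messages });
  messages.push({ role: 'assistant', content: result.content });
  return result.content;
}

const history = [{ role: 'system', content: 'You are a game master' }];
console.log(await ask(history, 'What is my quest?'));
console.log(await ask(history, 'How do I start?'));
```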
### Streaming with Full Configuration
```typescript
await chat.textGenerationStream({
  messages: [
    { role: 'system', content: 'You are helpful' },
    { role: 'user', content: 'Explain AI' }
  ],
  temperature: 0.7,
  maxTokens: 500,
  onChunk: (chunk) => {
    process.stdout.write(chunk);
  },
  onComplete: (fullText) => {
    console.log('\nDone! Length:', fullText.length);
  },
  onError: (error) => {
    console.error('Error:', error);
  }
});
```

## Configuration Options
### Temperature
Controls output randomness and creativity (0.0 - 2.0):
```typescript
// Low temperature - more deterministic, conservative
const formal = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Explain gravity' }],
  temperature: 0.2
});

// High temperature - more random, creative
const creative = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Write a sci-fi story' }],
  temperature: 1.2
});
```

| Temperature | Use Case |
|---|---|
| 0.0 - 0.3 | Factual Q&A, code generation |
| 0.4 - 0.7 | General conversation, game NPCs |
| 0.8 - 1.2 | Creative writing, brainstorming |
| 1.3 - 2.0 | Highly creative, experimental |
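
If you want these ranges encoded once rather than as magic numbers scattered through your code, a small preset map works; the preset names below are our own, not SDK constants:

```typescript
// Illustrative temperature presets matching the table above
const TEMPERATURE = {
  factual: 0.2,  // factual Q&A, code generation
  dialogue: 0.6, // general conversation, game NPCs
  creative: 1.0, // creative writing, brainstorming
} as const;

const reply = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Describe the tavern keeper' }],
  temperature: TEMPERATURE.dialogue
});
```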
### Max Tokens
Limit maximum generation length:
```typescript
const result = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Summarize quantum physics' }],
  maxTokens: 100 // roughly 75 English words
});
```

1 token ≈ 0.75 English words ≈ 0.5 Chinese characters. Setting this too low may truncate responses.
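
When the limit does cut a response short, the result's `finishReason` is `'length'` (see the Response Object above), so truncation can be detected and handled:

```typescript
const summary = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Summarize quantum physics' }],
  maxTokens: 100
});

if (summary.finishReason === 'length') {
  // The response hit maxTokens and was cut off mid-thought
  console.warn('Response truncated; consider raising maxTokens');
}
```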
### Seed
Use the same seed for reproducible results:
```typescript
const result1 = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Generate a random name' }],
  seed: 42,
  temperature: 0.7
});

const result2 = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Generate a random name' }],
  seed: 42,
  temperature: 0.7
});

// result1.content === result2.content (in most cases)
```

Seeds are useful for:
- Debugging and testing
- Reproducible game content
- A/B testing
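
For example, reproducible game content can key the seed to your world state, so every player sees the same generated text for the same level. The seed-derivation scheme here is made up for illustration:

```typescript
// Hypothetical: derive the seed from the level number so the
// description for level 7 is the same for every player
async function describeLevel(level: number) {
  const result = await chat.textGeneration({
    messages: [{ role: 'user', content: `Describe dungeon level ${level}` }],
    seed: 1000 + level,
    temperature: 0.8
  });
  return result.content;
}
```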
### Stop Sequences
Specify sequences that stop generation:
```typescript
const result = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Count to 10' }],
  stop: ['5', '6'] // Stop when encountering 5 or 6
});
```

### Top-P Sampling
Controls output diversity (0.0 - 1.0):
```typescript
const result = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Write a poem' }],
  topP: 0.9 // Sample only from the smallest token set covering 90% of probability mass
});
```

### Complete Configuration Example
```typescript
const result = await chat.textGeneration({
  messages: [
    { role: 'system', content: 'You are a medieval bard' },
    { role: 'user', content: 'Tell me a tale' }
  ],
  model: 'gpt-4o',
  temperature: 0.8,
  maxTokens: 500,
  seed: 123,
  stop: ['\n\n', '---'],
  topP: 0.95
});

console.log('Content:', result.content);
console.log('Model:', result.model);
console.log('Finish reason:', result.finishReason);
if (result.usage) {
  console.log('Prompt tokens:', result.usage.promptTokens);
  console.log('Completion tokens:', result.usage.completionTokens);
  console.log('Total tokens:', result.usage.totalTokens);
}
```

## Tool Calling
Let the AI call functions to interact with your game.
### Define Tools
```typescript
const tools = [
  {
    type: 'function',
    function: {
      name: 'getWeather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string', description: 'City name' }
        },
        required: ['location']
      }
    }
  },
  {
    type: 'function',
    function: {
      name: 'giveItem',
      description: 'Give an item to the player',
      parameters: {
        type: 'object',
        properties: {
          itemName: { type: 'string' },
          quantity: { type: 'number' }
        },
        required: ['itemName', 'quantity']
      }
    }
  }
];
```

### Call with Tools
```typescript
const result = await chat.textGenerationWithTools({
  messages: [
    { role: 'user', content: 'Give me 5 health potions' }
  ],
  tools
});

if (result.tool_calls && result.tool_calls.length > 0) {
  for (const toolCall of result.tool_calls) {
    console.log('Tool:', toolCall.function.name);
    const args = JSON.parse(toolCall.function.arguments);
    console.log('Args:', args);

    // Execute in your game
    if (toolCall.function.name === 'giveItem') {
      givePlayerItem(args.itemName, args.quantity);
    }
  }
}
```
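
As the tool list grows, an `if`/`else` chain gets unwieldy; a name-to-handler map is one way to keep dispatch flat. `toolHandlers` and the game functions here are illustrative, not SDK APIs:

```typescript
// Hypothetical dispatch table mapping tool names to game handlers
const toolHandlers: Record<string, (args: any) => void> = {
  giveItem: (args) => givePlayerItem(args.itemName, args.quantity),
  getWeather: (args) => console.log('Weather requested for', args.location),
};

for (const toolCall of result.tool_calls ?? []) {
  const handler = toolHandlers[toolCall.function.name];
  if (handler) {
    handler(JSON.parse(toolCall.function.arguments));
  }
}
```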
### Tool Choice Options

```typescript
// Auto: the model decides whether to use tools (default)
const result1 = await chat.textGenerationWithTools({
  messages,
  tools,
  tool_choice: 'auto'
});

// Required: the model must use a tool
const result2 = await chat.textGenerationWithTools({
  messages,
  tools,
  tool_choice: 'required'
});

// None: the model cannot use tools
const result3 = await chat.textGenerationWithTools({
  messages,
  tools,
  tool_choice: 'none'
});

// Specific: force a particular tool
const result4 = await chat.textGenerationWithTools({
  messages,
  tools,
  tool_choice: {
    type: 'function',
    function: { name: 'giveItem' }
  }
});
```

### Streaming with Tools
```typescript
await chat.textGenerationWithToolsStream({
  messages: [{ role: 'user', content: 'Check the weather in Tokyo' }],
  tools,
  onChunk: (chunk) => {
    process.stdout.write(chunk);
  },
  onComplete: (result) => {
    if (result.tool_calls) {
      console.log('\nTool calls:', result.tool_calls);
    }
  },
  onError: (error) => {
    console.error('Error:', error);
  }
});
```

### Multi-turn Tool Usage
```typescript
const messages = [
  { role: 'user', content: 'What is the weather in Paris?' }
];

// First call - the AI requests a tool
const result1 = await chat.textGenerationWithTools({ messages, tools });

if (result1.tool_calls) {
  // Add the assistant message with its tool calls
  messages.push({
    role: 'assistant',
    content: result1.content,
    tool_calls: result1.tool_calls
  });

  // Execute each tool and add its result
  for (const call of result1.tool_calls) {
    const weatherData = await getWeather(JSON.parse(call.function.arguments));
    messages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: JSON.stringify(weatherData)
    });
  }

  // Second call - the AI uses the tool result
  const result2 = await chat.textGeneration({ messages });
  console.log('Final answer:', result2.content);
}
```
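
The request-execute-respond cycle above handles one round. If the model may chain several tool calls, you can loop until a response arrives with no `tool_calls`. This sketch reuses the documented message shapes; `executeTool` stands in for your game-side dispatcher:

```typescript
// Hypothetical agent loop: keep resolving tool calls until the model
// answers in plain text (with a safety cap on rounds)
async function runWithTools(messages, tools, maxRounds = 5) {
  for (let round = 0; round < maxRounds; round++) {
    const result = await chat.textGenerationWithTools({ messages, tools });
    if (!result.tool_calls || result.tool_calls.length === 0) {
      return result.content; // final answer
    }
    messages.push({
      role: 'assistant',
      content: result.content,
      tool_calls: result.tool_calls
    });
    for (const call of result.tool_calls) {
      const output = await executeTool(call); // your game-side dispatcher
      messages.push({
        role: 'tool',
        tool_call_id: call.id,
        content: JSON.stringify(output)
      });
    }
  }
  throw new Error('Tool loop did not converge');
}
```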
## Multimodal Messages

Send images and audio along with text.
### Image Input
```typescript
import { createMultimodalMessage } from 'playkit-sdk';

// From a URL
const message = createMultimodalMessage(
  'user',
  'What is in this image?',
  [{ url: 'https://example.com/image.jpg', detail: 'high' }]
);

const result = await chat.textGeneration({
  messages: [message]
});

// From base64
const base64Image = 'data:image/jpeg;base64,/9j/4AAQSkZJRg...';
const message2 = createMultimodalMessage(
  'user',
  'Describe this scene',
  [{ url: base64Image, detail: 'auto' }]
);
```
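
In the browser, a `File` from an `<input type="file">` can be turned into a base64 data URL with the standard `FileReader` API. This helper is ours, not part of the SDK, and `fileInput` stands in for your file input element:

```typescript
// Convert a user-selected image file to a base64 data URL
function fileToDataUrl(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}

const dataUrl = await fileToDataUrl(fileInput.files[0]);
const msg = createMultimodalMessage('user', 'What is this?', [{ url: dataUrl }]);
```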
### Image Detail Levels

| Detail | Description |
|---|---|
| `auto` | Automatically choose based on image size |
| `low` | Lower resolution, faster processing |
| `high` | Higher resolution, more detailed analysis |
### Audio Input
```typescript
const message = createMultimodalMessage(
  'user',
  'Transcribe this audio',
  [], // No images
  [{ data: base64AudioData, format: 'wav' }]
);

const result = await chat.textGeneration({
  messages: [message]
});
```

### Supported Audio Formats
`wav`, `mp3`, `webm`, `flac`, `ogg`
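
If your audio lives in an `ArrayBuffer` (for example, from a microphone recording), it needs to be base64-encoded first. This conversion helper is our own, not an SDK utility, and `recordedAudioBuffer` is a placeholder for your captured audio:

```typescript
// Encode raw audio bytes as base64 for the SDK's audio input
// (browser btoa; in Node, Buffer.from(buffer).toString('base64') works instead)
function arrayBufferToBase64(buffer: ArrayBuffer): string {
  let binary = '';
  for (const byte of new Uint8Array(buffer)) {
    binary += String.fromCharCode(byte);
  }
  return btoa(binary);
}

const base64AudioData = arrayBufferToBase64(recordedAudioBuffer);
const msg = createMultimodalMessage('user', 'Transcribe this audio', [],
  [{ data: base64AudioData, format: 'wav' }]);
```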
### Combined Multimodal
```typescript
const message = createMultimodalMessage(
  'user',
  'What is happening in this image and audio?',
  [{ url: 'https://example.com/photo.jpg' }],
  [{ data: audioBase64, format: 'mp3' }]
);
```

## Structured Output
Generate type-safe JSON data using schemas.
### Using the Schema Library
```typescript
// Add a schema to the library
const schemaLibrary = sdk.getSchemaLibrary();
schemaLibrary.addSchema({
  name: 'enemy',
  description: 'Enemy character stats',
  schema: {
    type: 'object',
    properties: {
      name: { type: 'string' },
      health: { type: 'number' },
      damage: { type: 'number' },
      attacks: { type: 'array', items: { type: 'string' } }
    },
    required: ['name', 'health', 'damage', 'attacks']
  }
});

// Generate using the schema name
const enemy = await chat.generateStructuredByName(
  'enemy',
  'Create a fire dragon boss'
);

console.log(enemy);
// { name: "Inferno Drake", health: 5000, damage: 150, attacks: [...] }
```

### Using an Inline Schema
```typescript
const item = await chat.generateStructuredWithSchema(
  {
    type: 'object',
    properties: {
      name: { type: 'string' },
      damage: { type: 'number' },
      rarity: { type: 'string', enum: ['common', 'rare', 'epic', 'legendary'] }
    },
    required: ['name', 'damage', 'rarity']
  },
  'Create a legendary sword',
  { schemaName: 'weapon', model: 'gpt-4o-mini' }
);
```

### TypeScript Integration
```typescript
interface Enemy {
  name: string;
  health: number;
  damage: number;
  attacks: string[];
}

const enemy = await chat.generateStructuredByName<Enemy>(
  'enemy',
  'Create a frost giant'
);

// Full type safety
console.log(enemy.name);   // string
console.log(enemy.health); // number
```

For more advanced structured output features, see Structured Output.
## Supported Models
| Model | Description | Use Case |
|---|---|---|
| `gpt-4o` | GPT-4 Omni | Best performance, complex tasks |
| `gpt-4o-mini` | GPT-4 Omni Mini | Recommended, best value |
| `gpt-4` | GPT-4 | High-quality output |
| `gpt-3.5-turbo` | GPT-3.5 Turbo | Fast responses |
### Choosing a Model
```typescript
// High quality, complex reasoning
const gpt4 = sdk.createChatClient('gpt-4o');

// Balance of performance and cost (recommended)
const mini = sdk.createChatClient('gpt-4o-mini');

// Fast, simple tasks
const turbo = sdk.createChatClient('gpt-3.5-turbo');
```

## Error Handling
```typescript
import { PlayKitError } from 'playkit-sdk';

try {
  const result = await chat.textGeneration({ messages });
} catch (error) {
  if (error instanceof PlayKitError) {
    console.error(`[${error.code}] ${error.message}`);
    switch (error.code) {
      case 'NOT_AUTHENTICATED':
        console.log('Need to log in');
        break;
      case 'INSUFFICIENT_CREDITS':
        console.log('Not enough credits');
        break;
      case 'CHAT_ERROR':
        console.log('Chat failed:', error.message);
        break;
      case 'CHAT_STREAM_ERROR':
        console.log('Streaming failed:', error.message);
        break;
      case 'PARSE_ERROR':
        console.log('Failed to parse response');
        break;
      default:
        console.log('Unknown error');
    }
  }
}
```
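
Transient failures can often be retried. This backoff wrapper is a sketch of ours, not an SDK feature; it treats only `CHAT_ERROR` as retryable, and you should decide per error code whether a retry makes sense for your game:

```typescript
// Hypothetical retry helper with exponential backoff
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      const retryable = error instanceof PlayKitError && error.code === 'CHAT_ERROR';
      if (!retryable || i === attempts - 1) throw error;
      await new Promise((r) => setTimeout(r, 500 * 2 ** i)); // 0.5s, 1s, 2s...
    }
  }
  throw new Error('unreachable');
}

const result = await withRetry(() => chat.textGeneration({ messages }));
```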
## Best Practices

### 1. Choose the Right Model
```typescript
// Good: use a cheaper model for simple tasks
const chat = sdk.createChatClient('gpt-4o-mini');

// Use the more expensive model only when needed
const complexChat = sdk.createChatClient('gpt-4o');
```

### 2. Use System Prompts Effectively
```typescript
// Good: clear role and rules
const response = await chat.chat(
  'What should I do?',
  'You are a wise wizard. Keep responses under 50 words. Be mysterious.'
);

// Bad: no context
const vague = await chat.chat('What should I do?');
```

### 3. Control Output Length
```typescript
// Good: limit tokens for predictable costs
const result = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Summarize' }],
  maxTokens: 100
});

// Bad: unbounded response length
const unbounded = await chat.textGeneration({
  messages: [{ role: 'user', content: 'Explain everything' }]
});
```

### 4. Use Streaming for Better UX
```typescript
// Good: immediate feedback
await chat.chatStream('Long story', (chunk) => display(chunk));

// Bad: the user waits for the complete response
const response = await chat.chat('Long story');
display(response);
```

### 5. Manage Conversation History
```typescript
// Good: limit history length
const messages = conversationHistory.slice(-10);

// Bad: unlimited history growth
const allMessages = fullConversationHistory; // could be thousands of messages
```
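
Note that a plain `slice(-10)` can also drop the system prompt. A trimming helper that always keeps it is one option; this is our sketch, not an SDK utility:

```typescript
// Hypothetical trimmer: keep system prompts plus the last N other turns
function trimHistory(history, keep = 10) {
  const system = history.filter((m) => m.role === 'system');
  const rest = history.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-keep)];
}

const messages = trimHistory(fullConversationHistory);
```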
### 6. Reuse Client Instances

```typescript
// Good: reuse one client
const chat = sdk.createChatClient();
await chat.chat('Q1');
await chat.chat('Q2');

// Bad: create a new client for every call
await sdk.createChatClient().chat('Q1');
await sdk.createChatClient().chat('Q2');
```

## Configuration Reference
### ChatConfig
| Option | Type | Default | Description |
|---|---|---|---|
| `messages` | `Message[]` | required | Conversation messages |
| `model` | `string` | `'gpt-4o-mini'` | AI model to use |
| `temperature` | `number` | `0.7` | Randomness (0.0-2.0) |
| `maxTokens` | `number` | - | Maximum tokens to generate |
| `seed` | `number` | - | Random seed for reproducibility |
| `stop` | `string[]` | - | Stop sequences |
| `topP` | `number` | - | Top-P sampling (0.0-1.0) |
### Message Roles
| Role | Description |
|---|---|
| `system` | Instructions for AI behavior |
| `user` | User input messages |
| `assistant` | AI responses |
| `tool` | Tool execution results |
## Next Steps
- Learn about NPC Conversations for game characters with automatic history
- Explore Structured Output for type-safe data generation
- Check API Reference for complete method details