Get structured, parsable, and immediately usable output from LLMs
The difference between usable and unusable AI output often comes down to formatting. Structured output enables automation, integration, and programmatic processing of LLM responses.
Don't just ask for JSON — provide the exact structure with field names, types, and nesting.
Extract contact information from this email and return as JSON:
{
"contacts": [
{
"name": "string",
"email": "string",
"phone": "string or null",
"company": "string or null",
"title": "string or null"
}
],
"meeting_requested": "boolean",
"urgency": "low | medium | high"
}
Return ONLY valid JSON with no additional text or explanation.
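A minimal sketch of wiring this prompt into code; call_llm is a hypothetical stand-in for whichever provider SDK you use, and the schema string mirrors the structure above:

```python
import json

# Hypothetical helper -- replace with a call to your LLM provider's SDK.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client")

SCHEMA = """{
  "contacts": [
    {
      "name": "string",
      "email": "string",
      "phone": "string or null",
      "company": "string or null",
      "title": "string or null"
    }
  ],
  "meeting_requested": "boolean",
  "urgency": "low | medium | high"
}"""

def extract_contacts(email_text: str) -> dict:
    prompt = (
        "Extract contact information from this email and return as JSON:\n"
        + SCHEMA
        + "\nReturn ONLY valid JSON with no additional text or explanation."
        + "\n\nEmail:\n" + email_text
    )
    raw = call_llm(prompt)
    data = json.loads(raw)  # raises json.JSONDecodeError if the model ignored the format
    if not isinstance(data.get("contacts"), list):
        raise ValueError("response missing 'contacts' array")
    return data
```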
LLMs sometimes add commentary before or after JSON. Be explicit about what you need.
❌ Bad Output:
Here's the JSON you requested:
{"name": "John"}
Hope this helps!
✓ Good Output:
{"name": "John"}
Add to prompt: "Output ONLY the JSON object with no additional text, explanation, or markdown formatting."
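Even with that instruction, defensive parsing is cheap insurance. A small sketch in plain Python that strips a code fence or surrounding chatter before handing the text to json.loads:

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Best-effort recovery when the model wraps JSON in prose or a code fence."""
    fenced = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    if fenced:
        raw = fenced.group(1)
    # Fall back to the outermost braces if there is no fence.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])

extract_json('Here\'s the JSON you requested:\n{"name": "John"}\nHope this helps!')
# -> {'name': 'John'}
```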
Specify how to handle missing or unavailable information in the output.
Include in your prompts:
"If a field is unknown, return null (not an empty string)"
"Include a "confidence" field ranging from 0.0 to 1.0"
"Return [] if no items are found, not null"
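These conventions also make post-processing straightforward. A hedged sketch, reusing the contact fields from the earlier example, that normalizes whatever the model returns to match them:

```python
def normalize_contact(contact: dict) -> dict:
    """Enforce the conventions above: null instead of empty strings, every key present."""
    fields = ["name", "email", "phone", "company", "title"]
    return {f: (contact.get(f) or None) for f in fields}

def normalize_response(data: dict) -> dict:
    # Empty array, not null, when nothing was found.
    data["contacts"] = [normalize_contact(c) for c in data.get("contacts") or []]
    # Clamp confidence into the requested 0.0-1.0 range if present.
    if "confidence" in data:
        data["confidence"] = min(max(float(data["confidence"]), 0.0), 1.0)
    return data
```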
Specify column headers, data types, and the format (Markdown, CSV, HTML, etc.)
Compare these 5 project management tools and present as a Markdown table.
Columns (in this order):
Format as a Markdown table with proper alignment. Add a row separator between headers and data.
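When the table will feed a spreadsheet or database, CSV is often easier to consume than Markdown; a short sketch using only the standard library, with illustrative column names:

```python
import csv
import io

def parse_csv_output(raw: str) -> list[dict]:
    """Turn CSV returned by the model into row dicts keyed by the header row."""
    return list(csv.DictReader(io.StringIO(raw.strip())))

parse_csv_output("tool,pricing,rating\nToolA,Free tier,4.5\nToolB,Paid,4.1")
# -> [{'tool': 'ToolA', 'pricing': 'Free tier', 'rating': '4.5'}, ...]
```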
Numbered, bulleted, or nested? Sorted by what criteria? Be specific.
Examples:
"Return as a numbered list, sorted by priority (highest first)"
"Format as nested bullet points: main categories as bullets, sub-items indented with dashes"
"Create a checklist with [ ] for incomplete items"
Request specific Markdown features for properly formatted, readable output.
Write a technical guide and format as Markdown with these requirements:
## for main sections, ### for subsections
Inline code (`backticks`) for commands, file names, variables
Blockquotes (>) for warnings or important notes
Links [text](url) for external references
When building AI-powered APIs, structure responses for easy parsing and error handling.
Analyze customer sentiment and return response in this format:
{
"status": "success",
"data": {
"sentiment": "positive | neutral | negative",
"confidence": 0.95,
"score": -1.0 to 1.0,
"key_themes": ["string", "string"],
"suggested_action": "string"
},
"metadata": {
"model": "claude-3-opus",
"processing_time_ms": 234
}
}
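A hedged sketch of consuming that envelope on the application side; SentimentResult is an illustrative container, not part of any SDK:

```python
import json
from dataclasses import dataclass

@dataclass
class SentimentResult:
    sentiment: str
    confidence: float
    score: float
    key_themes: list[str]
    suggested_action: str

def parse_sentiment_response(raw: str) -> SentimentResult:
    payload = json.loads(raw)
    if payload.get("status") != "success":
        raise RuntimeError(f"unexpected status: {payload.get('status')}")
    data = payload["data"]
    return SentimentResult(
        sentiment=data["sentiment"],
        confidence=float(data["confidence"]),
        score=float(data["score"]),
        key_themes=list(data.get("key_themes", [])),
        suggested_action=data.get("suggested_action", ""),
    )
```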
Define how the LLM should respond when it can't fulfill the request.
Add to prompts:
"If you cannot complete the task, return:
{
"status": "error",
"error_code": "INSUFFICIENT_DATA | AMBIGUOUS_REQUEST | etc",
"message": "Human-readable error description",
"suggestions": ["What user should provide"]
}
Do not make up data or guess when information is missing."
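With both envelopes defined, calling code can branch on status instead of guessing; a sketch assuming the success and error formats above:

```python
import json

def handle_response(raw: str) -> dict:
    payload = json.loads(raw)
    status = payload.get("status")
    if status == "success":
        return payload["data"]
    if status == "error":
        # Surface the structured error instead of silently using bad data.
        code = payload.get("error_code", "UNKNOWN")
        raise ValueError(f"{code}: {payload.get('message', '')}")
    raise ValueError(f"unrecognized status: {status!r}")
```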
Show 1-2 examples of the exact format you want
Parse the output to verify it's valid JSON/XML/CSV before using it
For complex prompts, use delimiters like ```json...``` to separate output from reasoning
Be explicit: "string", "number", "boolean", "array", "ISO 8601 date", etc.
"Return as JSON" is too vague. LLMs may add markdown code blocks or explanations
Define how to handle empty results, nulls, very large outputs, and special characters
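Validation plus a bounded retry loop covers most of these failure modes; a sketch that reuses the hypothetical call_llm helper from the first example, with an illustrative set of required keys:

```python
import json

REQUIRED_KEYS = {"sentiment", "confidence", "score"}  # adjust to your schema

def validate_output(raw: str, max_retries: int = 2) -> dict:
    """Parse, check required keys, and re-ask the model if the format is wrong."""
    for attempt in range(max_retries + 1):
        try:
            data = json.loads(raw)
            missing = REQUIRED_KEYS - set(data) if isinstance(data, dict) else REQUIRED_KEYS
            if not missing:
                return data
            problem = f"missing keys: {sorted(missing)}"
        except json.JSONDecodeError as exc:
            problem = f"invalid JSON: {exc}"
        if attempt < max_retries:
            # call_llm: the hypothetical provider wrapper from the first sketch.
            raw = call_llm("Your previous output was rejected "
                           f"({problem}). Return ONLY the corrected JSON.")
    raise ValueError(f"could not obtain valid structured output: {problem}")
```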
We can help you design API-ready prompts and workflows that produce consistent, parsable output