🤖 emlog Integration with LLMs and AI Applications
Configure AI
The latest version of emlog supports configuring AI LLMs, making it easy to integrate models such as DeepSeek and OpenAI. In the backend, open the left system menu - Settings - AI to add an AI LLM and start using AI features such as conversation, text generation, and image generation.
Text Chat Models
The system supports adding OpenAI protocol LLMs. Configuration examples for well-known LLMs are as follows:
OpenAI
- API URL: https://api.openai.com/v1/chat/completions
- API Key: Generate API Key, format: sk-****
- Model: gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-4.1-2025-04-14, gpt-4o-2024-11-20 (All Models)
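The OpenAI-protocol configuration above maps directly onto an HTTP request. The sketch below shows how the three settings (API URL, API Key, Model) fit into a Chat Completions call; the helper name `build_chat_request` is illustrative and not part of emlog, and the payload is only built, not sent.

```python
import json

def build_chat_request(api_url: str, api_key: str, model: str, user_message: str):
    """Assemble an OpenAI-protocol chat completion request (URL, headers, body).

    The same shape works for the other OpenAI-compatible endpoints listed
    here (Gemini's compatibility endpoint, Grok).
    """
    headers = {
        # The configured API Key is sent as a Bearer token.
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        # The configured model name goes into the "model" field.
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return api_url, headers, json.dumps(body)

# Example using the OpenAI settings from the list above:
url, headers, body = build_chat_request(
    "https://api.openai.com/v1/chat/completions",
    "sk-****",
    "gpt-4o-2024-11-20",
    "Hello",
)
```

POSTing `body` with those headers to `url` returns a JSON response whose reply text is under `choices[0].message.content`.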
Google Gemini
- API URL: https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
- API Key: Generate API Key
- Model: gemini-2.0-flash, gemini-2.5-pro-preview-06-05
Claude
- API URL: https://api.anthropic.com/v1/messages
- API Key: Generate API Key
- Model: claude-sonnet-4-20250514, claude-3-7-sonnet-20250219, claude-3-5-sonnet-20241022
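Note that the Claude endpoint above is Anthropic's native Messages API rather than the OpenAI protocol: authentication uses an `x-api-key` header instead of a Bearer token, an `anthropic-version` header is required, and the body must include `max_tokens`. A minimal sketch (the helper name is illustrative; the payload is built, not sent):

```python
import json

def build_claude_request(api_key: str, model: str, user_message: str):
    """Assemble an Anthropic Messages API request (URL, headers, body)."""
    headers = {
        "x-api-key": api_key,              # not "Authorization: Bearer"
        "anthropic-version": "2023-06-01", # required API version header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 1024,  # required by the Messages API
        "messages": [{"role": "user", "content": user_message}],
    }
    return "https://api.anthropic.com/v1/messages", headers, json.dumps(body)

claude_url, claude_headers, claude_body = build_claude_request(
    "sk-ant-****", "claude-sonnet-4-20250514", "Hello"
)
```

In the response, the reply text appears under `content[0].text` rather than the OpenAI-style `choices[0].message.content`.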
Grok
- API URL: https://api.x.ai/v1/chat/completions
- API Key: Generate API Key
- Model: grok-4-0709, grok-3, grok-3-fast, grok-3-mini
Image Generation Models
OpenAI DALL-E
- API URL: https://api.openai.com/v1/images/generations
- API Key: Generate API Key, format: sk-****
- Model: dall-e-3
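Image generation uses a different payload than chat: a `prompt` string plus image options instead of a `messages` array. A minimal sketch of the request built from the settings above (helper name illustrative; the payload is built, not sent):

```python
import json

def build_image_request(api_key: str, prompt: str, size: str = "1024x1024"):
    """Assemble an OpenAI Images API request (URL, headers, body)."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "dall-e-3",
        "prompt": prompt,  # text description of the desired image
        "n": 1,            # number of images to generate
        "size": size,      # e.g. "1024x1024"
    }
    return "https://api.openai.com/v1/images/generations", headers, json.dumps(body)

img_url, img_headers, img_body = build_image_request("sk-****", "a watercolor fox")
```

The response contains the generated image under the `data` array, typically as a URL or base64 string.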
FAQ
Why is the AI conversation response slow?
- It may be an issue on the AI model provider's side. Try switching to a model from another service provider.
- It may be a server network problem. AI API requests are relayed through your server, so an unstable connection between the server and the provider will cause delays.
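One common way to reduce *perceived* delay is streaming: with `"stream": true` in an OpenAI-protocol request, the provider returns the reply as incremental `data: {...}` server-sent-event chunks that can be rendered as they arrive. A sketch of collecting text from such chunks (the sample lines are fabricated for illustration; no network call is made):

```python
import json

def extract_stream_text(sse_lines):
    """Collect assistant text from OpenAI-protocol streaming (SSE) lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Simulated chunks in the shape a provider streams them:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
```

Here `extract_stream_text(sample)` yields `"Hello"`; in a real client each delta would be appended to the chat window immediately instead of buffered.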