AI commands enable post-processing of transcriptions using large language models for grammar correction, formatting, and punctuation.
AI Settings
get_ai_settings
Get current AI enhancement settings.
#[tauri::command]
pub async fn get_ai_settings(
app: AppHandle,
) -> Result<AISettings, String>
Returns:
interface AISettings {
enabled: boolean; // AI enhancement enabled/disabled
provider: string; // "openai", "gemini", or "custom"
model: string; // Model ID (e.g., "gpt-5-nano")
hasApiKey: boolean; // API key configured
}
Usage:
const settings = await invoke<AISettings>('get_ai_settings');
if (settings.enabled && settings.hasApiKey) {
console.log(`AI enabled with ${settings.provider}/${settings.model}`);
} else if (settings.enabled && !settings.hasApiKey) {
console.warn('AI enabled but no API key configured');
}
get_ai_settings_for_provider
Get AI settings for a specific provider.
#[tauri::command]
pub async fn get_ai_settings_for_provider(
provider: String,
app: AppHandle,
) -> Result<AISettings, String>
Provider name: "openai", "gemini", or "custom"
Usage:
const geminiSettings = await invoke<AISettings>('get_ai_settings_for_provider', {
provider: 'gemini'
});
update_ai_settings
Update AI enhancement settings.
#[tauri::command]
pub async fn update_ai_settings(
enabled: bool,
provider: String,
model: String,
app: AppHandle,
) -> Result<(), String>
enabled - Enable or disable AI enhancement
provider - AI provider: "openai", "gemini", or "custom"
model - Model ID (e.g., "gpt-5-nano", "gemini-3-flash-preview")
Usage:
await invoke('update_ai_settings', {
enabled: true,
provider: 'openai',
model: 'gpt-5-nano'
});
Errors:
"Please select a model before enabling AI enhancement" - Model empty
"API key not found. Please add an API key first." - No API key cached
disable_ai_enhancement
Disable AI enhancement.
#[tauri::command]
pub async fn disable_ai_enhancement(
app: AppHandle,
) -> Result<(), String>
Usage:
await invoke('disable_ai_enhancement');
API Key Management
cache_ai_api_key
Cache an API key for backend use (called on app startup).
#[tauri::command]
pub async fn cache_ai_api_key(
_app: AppHandle,
args: CacheApiKeyArgs,
) -> Result<(), String>
Provider: "openai", "gemini", or "custom"
Usage:
// Frontend stores key in Stronghold, then caches for backend
await invoke('cache_ai_api_key', {
provider: 'openai',
apiKey: 'sk-...' // From secure storage
});
Note: This command does NOT validate the API key. Use validate_and_cache_api_key for new keys.
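The startup flow implied here — load each stored key, then cache it for the backend — might be sketched as follows. `loadKey` stands in for a Stronghold read, and the provider list and helper are assumptions for illustration; `invoke` is injected so the sketch is self-contained:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<void>;

// On startup, load each provider's stored key and cache it for the backend.
// Providers without a stored key are skipped; no validation happens here.
async function cacheStoredKeys(
  invoke: Invoke,
  loadKey: (provider: string) => Promise<string | null>,
  providers: string[] = ['openai', 'gemini', 'custom'],
): Promise<string[]> {
  const cached: string[] = [];
  for (const provider of providers) {
    const apiKey = await loadKey(provider);
    if (apiKey) {
      await invoke('cache_ai_api_key', { provider, apiKey });
      cached.push(provider);
    }
  }
  return cached; // providers whose keys were cached
}
```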
validate_and_cache_api_key
Validate and cache a new API key.
#[tauri::command]
pub async fn validate_and_cache_api_key(
app: AppHandle,
args: ValidateAndCacheApiKeyArgs,
) -> Result<(), String>
Provider: "openai", "gemini", or "custom"
API key (optional for noAuth: true)
Custom base URL (for OpenAI-compatible APIs)
Model to test (defaults to "gpt-5-nano")
Skip authentication (for local LLMs)
Usage:
// Validate OpenAI key
try {
await invoke('validate_and_cache_api_key', {
provider: 'openai',
apiKey: 'sk-...',
model: 'gpt-5-nano'
});
console.log('API key validated successfully');
} catch (error) {
console.error('Invalid API key:', error);
}
// Validate custom OpenAI-compatible endpoint
await invoke('validate_and_cache_api_key', {
provider: 'custom',
baseUrl: 'http://localhost:1234/v1',
model: 'local-model',
noAuth: true // No API key needed for local LLM
});
Validation:
- For OpenAI/custom: Sends test request to
/v1/models or /v1/chat/completions
- Checks if specified model exists
- Caches key only if validation succeeds
Errors:
"HTTP 401: Unauthorized" - Invalid API key
"Model 'xyz' not found in endpoint model list" - Model doesn’t exist
"Network error" - Connection failed
test_openai_endpoint
Test an OpenAI-compatible endpoint without saving.
#[tauri::command]
pub async fn test_openai_endpoint(
base_url: String,
model: String,
api_key: Option<String>,
no_auth: Option<bool>,
) -> Result<(), String>
baseUrl - Base URL (e.g., "http://localhost:1234/v1")
Usage:
// Test local LLM endpoint
try {
await invoke('test_openai_endpoint', {
baseUrl: 'http://localhost:1234/v1',
model: 'llama-3.1',
noAuth: true
});
console.log('Endpoint is valid');
} catch (error) {
console.error('Endpoint test failed:', error);
}
clear_ai_api_key_cache
Clear cached API key for a provider.
#[tauri::command]
pub async fn clear_ai_api_key_cache(
_app: AppHandle,
provider: String,
) -> Result<(), String>
Usage:
await invoke('clear_ai_api_key_cache', { provider: 'openai' });
Enhancement
enhance_transcription
Enhance transcribed text using AI.
#[tauri::command]
pub async fn enhance_transcription(
text: String,
app: AppHandle,
) -> Result<String, String>
text - Raw transcription text to enhance
Returns: Enhanced text with proper grammar, punctuation, and formatting
Usage:
const rawText = "hello world this is a test";
const enhanced = await invoke<string>('enhance_transcription', {
text: rawText
});
console.log('Enhanced:', enhanced);
// Output: "Hello world, this is a test."
Behavior:
- Checks if AI is enabled in settings
- Returns original text if disabled
- Loads provider config (API key, base URL, model)
- Sends text to LLM with enhancement prompt
- Returns formatted response
Enhancement Options:
See get_enhancement_options and update_enhancement_options for customization.
Errors:
"AI enhancement is disabled" - Not enabled in settings
"API key not found in cache" - Missing API key
"AI formatting failed: ..." - LLM request failed
get_enhancement_options
Get AI enhancement formatting options.
#[tauri::command]
pub async fn get_enhancement_options(
app: AppHandle,
) -> Result<EnhancementOptions, String>
Returns:
interface EnhancementOptions {
preset: 'professional' | 'casual' | 'technical' | 'creative';
custom_instructions?: string;
}
Usage:
const options = await invoke<EnhancementOptions>('get_enhancement_options');
console.log('Using preset:', options.preset);
update_enhancement_options
Update AI enhancement options.
#[tauri::command]
pub async fn update_enhancement_options(
options: EnhancementOptions,
app: AppHandle,
) -> Result<(), String>
options.preset - Formatting preset: "professional", "casual", "technical", or "creative"
options.custom_instructions - Additional instructions for the LLM
Usage:
await invoke('update_enhancement_options', {
options: {
preset: 'technical',
custom_instructions: 'Format code snippets with markdown'
}
});
Provider Models
list_provider_models
Get curated list of models for a provider.
#[tauri::command]
pub async fn list_provider_models(
provider: String,
_app: AppHandle,
) -> Result<Vec<ProviderModel>, String>
Provider: "openai" or "gemini"
Returns:
interface ProviderModel {
id: string; // Model ID (e.g., "gpt-5-nano")
name: string; // Display name (e.g., "GPT-5 Nano")
recommended: boolean; // Official recommendation
}
Usage:
const models = await invoke<ProviderModel[]>('list_provider_models', {
provider: 'openai'
});
const recommended = models.find(m => m.recommended);
console.log('Recommended:', recommended?.name);
Available Models:
OpenAI:
gpt-5-nano - GPT-5 Nano (recommended)
gpt-5-mini - GPT-5 Mini (recommended)
Gemini:
gemini-3-flash-preview - Gemini 3 Flash (recommended)
gemini-2.5-flash - Gemini 2.5 Flash (recommended)
gemini-2.5-flash-lite - Gemini 2.5 Flash Lite (recommended)
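When presenting these lists, a UI typically preselects a recommended entry. One possible helper — not part of the API:

```typescript
interface ProviderModel {
  id: string;
  name: string;
  recommended: boolean;
}

// Preselect the first recommended model, falling back to the first listed.
function defaultModel(models: ProviderModel[]): ProviderModel | undefined {
  return models.find(m => m.recommended) ?? models[0];
}
```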
OpenAI Configuration
set_openai_config
Configure OpenAI-compatible endpoint.
#[tauri::command]
pub async fn set_openai_config(
app: AppHandle,
args: SetOpenAIConfigArgs,
) -> Result<(), String>
baseUrl - Base URL (e.g., "http://localhost:1234/v1")
noAuth - Skip authentication (optional)
Usage:
await invoke('set_openai_config', {
baseUrl: 'http://localhost:1234/v1',
noAuth: true
});
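Configuring a local endpoint usually pairs with a validation step. The sequence below is one plausible order — probe without saving, then persist and register the no-auth "custom" entry — with `invoke` injected for illustration; the ordering is an assumption, not a documented requirement:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<void>;

// Probe the endpoint first (throws if unreachable), then persist the
// config and cache the no-auth "custom" provider entry.
async function configureLocalLLM(invoke: Invoke, baseUrl: string, model: string): Promise<void> {
  await invoke('test_openai_endpoint', { baseUrl, model, noAuth: true });
  await invoke('set_openai_config', { baseUrl, noAuth: true });
  await invoke('validate_and_cache_api_key', { provider: 'custom', baseUrl, model, noAuth: true });
}
```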
get_openai_config
Get current OpenAI configuration.
#[tauri::command]
pub async fn get_openai_config(
app: AppHandle,
) -> Result<OpenAIConfig, String>
Returns:
interface OpenAIConfig {
baseUrl: string;
noAuth: boolean;
}
Usage:
const config = await invoke<OpenAIConfig>('get_openai_config');
console.log('Endpoint:', config.baseUrl);
Supported Providers
| Provider | Models | API Key Required |
|---|---|---|
| OpenAI | GPT-5 Nano, GPT-5 Mini | Yes |
| Gemini | Gemini 3 Flash, Gemini 2.5 Flash | Yes |
| Custom | Any OpenAI-compatible | Optional |
See Also