# AI Helpers
`forgetless` re-exports local ML helpers for embeddings, text generation, and optional image description. These match the modules under `src/ai`.
## Embeddings
```rust
use forgetless::{cosine_similarity, embed_batch, embed_text, EmbeddingCache};

let query = embed_text("release summary")?;
let docs = embed_batch(&["design review", "old notes"])?;
let score = cosine_similarity(&query, &docs[0]);

let mut cache = EmbeddingCache::new(10_000);
```
The embedding backend uses FastEmbed with the quantized `all-MiniLM-L6-v2` model and an in-process LRU cache.
## Local LLM
| Preset | Model ID |
|---|---|
| `LLMConfig::smollm2()` | `HuggingFaceTB/SmolLM2-135M-Instruct` |
| `LLMConfig::smollm2_360m()` | `HuggingFaceTB/SmolLM2-360M-Instruct` |
| `LLMConfig::qwen_0_5b()` | `Qwen/Qwen2.5-0.5B-Instruct` |
| `LLMConfig::phi3_mini()` | `microsoft/Phi-3-mini-4k-instruct` |
```rust
use forgetless::{LLM, LLMConfig, Quantization};

LLM::init_with_config(
    LLMConfig::qwen_0_5b()
        .with_quantization(Quantization::Q4)
        .with_max_tokens(256),
)
.await?;
```
## Vision
```rust
use forgetless::{describe_image, describe_image_with_prompt, init_vision, is_vision_ready};

if !is_vision_ready() {
    init_vision().await?;
}

let caption = describe_image(&image_bytes).await?;
let answer = describe_image_with_prompt(&image_bytes, "What text is visible?").await?;
```
The vision helper uses `HuggingFaceTB/SmolVLM-256M-Instruct` through `mistralrs` and resizes inputs to a fixed 224x224 shape before inference.