# Configuration
`Config` covers the runtime switches most projects need. Lower-level tuning is available through `ForgetlessConfig`, `ChunkConfig`, `ScoringConfig`, and `TokenizerModel`.
## Runtime options
```rust
use forgetless::{Config, Forgetless};

let result = Forgetless::new()
    .config(
        Config::default()
            .context_limit(96_000)
            .chunk_size(256)
            .vision_llm(true)
            .context_llm(true)
            .parallel(true)
            .cache(true),
    )
    .add_file("diagram.png")
    .run()
    .await?;
```
| Field | Type | Default | Effect |
|---|---|---|---|
| `context_limit` | `usize` | 128,000 | Maximum output token budget. |
| `vision_llm` | `bool` | `false` | Initializes SmolVLM-256M for image description. |
| `context_llm` | `bool` | `false` | Initializes SmolLM2-135M for post-selection polishing. |
| `chunk_size` | `usize` | 512 | Sets the target token size used when chunking. |
| `parallel` | `bool` | `true` | Stored on the config; the current builder path uses parallel file reads regardless of this flag. |
| `cache` | `bool` | `true` | Stored on the config; the global embedding cache is not yet toggled by this flag. |
`parallel` and `cache` are part of the public config surface, but in the current source they are stored rather than used to branch runtime behavior.
## Local model behavior
```rust
use forgetless::{Config, Forgetless};

// Fast path: embeddings + heuristics only
let fast = Forgetless::new()
    .config(Config::default().context_limit(64_000))
    .add_file("paper.pdf");

// Add local image descriptions
let with_vision = Forgetless::new()
    .config(Config::default().vision_llm(true))
    .add_file("diagram.png");

// Add local text polishing
let with_context_llm = Forgetless::new()
    .config(Config::default().context_llm(true))
    .add(long_context);
```
Vision mode loads `HuggingFaceTB/SmolVLM-256M-Instruct`. Context polish mode loads `HuggingFaceTB/SmolLM2-135M-Instruct` and only runs after chunk selection.
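Because the polish pass runs strictly after selection, the flow is two ordered stages: trim to budget first, then rewrite only what survived. The sketch below is an illustrative assumption (the `pipeline` helper and word-count budgeting are made up for this example, not the crate's internals):

```rust
/// Illustrative two-stage flow: select chunks under a budget, then polish
/// only the survivors. Words stand in for tokens here.
fn pipeline(
    chunks: Vec<String>,
    budget: usize,
    polish: impl Fn(&str) -> String,
) -> Vec<String> {
    // Stage 1: selection — keep chunks until the budget is spent.
    let mut used = 0;
    let selected: Vec<String> = chunks
        .into_iter()
        .take_while(|c| {
            let n = c.split_whitespace().count();
            if used + n > budget {
                return false;
            }
            used += n;
            true
        })
        .collect();
    // Stage 2: polish — runs only on the selected chunks.
    selected.iter().map(|c| polish(c)).collect()
}

fn main() {
    let chunks = vec!["a b".to_string(), "c d e".to_string(), "f".to_string()];
    // Budget of 5 words admits the first two chunks; "f" is dropped before
    // the polish stage ever sees it.
    let out = pipeline(chunks, 5, |c| c.to_uppercase());
    assert_eq!(out, vec!["A B", "C D E"]);
}
```

The point of the ordering is cost: the context LLM never spends time rewriting chunks that selection would have discarded anyway.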
## Chunk tuning
```rust
use forgetless::{ChunkConfig, Config, ForgetlessConfig, ScoringConfig};

let advanced = ForgetlessConfig::new(
    Config::default().context_limit(64_000),
)
.with_chunk(
    ChunkConfig::for_quality()
        .with_target_tokens(256)
        .with_max_tokens(512)
        .with_min_tokens(10),
)
.with_scoring(ScoringConfig {
    semantic_weight: 0.6,
    keyword_weight: 0.25,
    priority_weight: 0.15,
});
```
| Preset | Target tokens | When to use it |
|---|---|---|
| `ChunkConfig::default()` | 512 | General-purpose prompt compression. |
| `ChunkConfig::for_code()` | 256 | More granular code selection. |
| `ChunkConfig::for_conversation()` | 200 | Message-oriented chat history. |
| `ChunkConfig::for_speed()` | 1000 | Faster processing with larger chunks. |
| `ChunkConfig::for_quality()` | 256 | Smaller chunks for more precise selection. |
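The presets mainly trade chunk granularity against throughput. For a rough intuition of how the target and minimum sizes interact, here is a toy greedy splitter (whitespace tokens stand in for real tokens; this is an illustrative sketch, not forgetless's chunker):

```rust
/// Toy chunker: groups tokens into runs of roughly `target` tokens and folds
/// a trailing fragment smaller than `min` into the previous chunk.
/// Illustrative only — not the crate's implementation.
fn chunk_tokens(tokens: &[&str], target: usize, min: usize) -> Vec<Vec<String>> {
    let mut chunks: Vec<Vec<String>> = tokens
        .chunks(target)
        .map(|piece| piece.iter().map(|s| s.to_string()).collect())
        .collect();
    // A too-small tail would waste a selection slot, so merge it backwards.
    if chunks.len() > 1 && chunks.last().map_or(false, |c| c.len() < min) {
        let tail = chunks.pop().unwrap();
        chunks.last_mut().unwrap().extend(tail);
    }
    chunks
}

fn main() {
    // 1030 tokens at target 512 would leave a 6-token tail; with min 10 it is
    // merged, yielding chunks of 512 and 518 tokens.
    let toks: Vec<String> = (0..1030).map(|i| i.to_string()).collect();
    let refs: Vec<&str> = toks.iter().map(|s| s.as_str()).collect();
    let chunks = chunk_tokens(&refs, 512, 10);
    assert_eq!(chunks.len(), 2);
    assert_eq!(chunks[1].len(), 518);
}
```

Under `for_quality()`-style settings the same logic applies at target 256: fragments under the minimum are folded into their predecessor rather than emitted as standalone chunks.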
## Tokenizer options
```rust
use forgetless::{Config, ForgetlessConfig, TokenizerModel};

let config = ForgetlessConfig::new(Config::default())
    .with_tokenizer(TokenizerModel::Custom {
        chars_per_token_x100: 400,
    });
```
The default tokenizer uses `cl100k_base`. A custom characters-per-token ratio is available when you want a lightweight estimate instead.
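Judging by the field name, `chars_per_token_x100` appears to be a fixed-point ratio scaled by 100, so `400` would mean roughly 4.00 characters per token. A minimal sketch of how such an estimate works (the `estimate_tokens` helper is illustrative, not the library's code):

```rust
/// Estimate a token count from a character count using a ratio scaled by 100.
/// `chars_per_token_x100 = 400` corresponds to ~4.00 characters per token.
fn estimate_tokens(text: &str, chars_per_token_x100: usize) -> usize {
    let chars = text.chars().count();
    // Fixed-point integer division, rounding up so short texts never
    // estimate zero tokens.
    (chars * 100 + chars_per_token_x100 - 1) / chars_per_token_x100
}

fn main() {
    // 1200 characters at ~4 chars/token estimates 300 tokens.
    let text = "a".repeat(1200);
    assert_eq!(estimate_tokens(&text, 400), 300);
}
```

The estimate is a heuristic: real BPE tokenizers like `cl100k_base` produce counts that vary with content, but a characters-per-token ratio avoids loading a vocabulary at all.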