April 2026

v0.6.0

The biggest release yet — Vellum goes open source, introduces platform-hosted assistants, a completely revamped memory system with multimodal support, Outlook feature parity, conversation folders, and the new Heartbeat system for proactive assistant check-ins.

  • Open source launch: The Vellum Assistant repository is now publicly available, inviting the community to explore, contribute, and build on the platform
  • Platform-hosted assistants: Assistants can now run fully hosted on the Vellum platform, removing the need for local infrastructure and enabling seamless cloud-based operation
  • Revamped memory system with multimodal support: The memory graph has been completely overhauled with image references, multimodal embeddings, event-dated memories, hybrid dense/sparse search for recall, and smarter injection and consolidation — replacing the legacy memory items table entirely
  • Outlook Calendar and Email reach full feature parity with Google: the Outlook Calendar and Outlook Email integrations are now generally available, matching the functionality previously offered only for Google Calendar and Gmail
  • Conversation folders and sidebar improvements: A full conversation groups/folders system has been introduced, with system groups seeded on first launch, source-based subgrouping, improved icons and count badges, and a polished delete group flow
  • Heartbeat — proactive assistant check-ins: The new Heartbeat system enables the assistant to periodically review its scratchpad, reflect on recent conversations, and proactively reach out when it has something worth sharing — now enabled by default
  • Skills API redesign: Skills have been refactored with a new kind/origin/status model, unified search across all skill registries, improved install paths, and better handling of capability nodes including seeding, pruning, and injection ordering
  • Performance and stability improvements across macOS and iOS: Caching visible conversations and paginated messages to eliminate multi-second hangs, moving expensive operations off the main thread, caching NSImage instances, warming up fonts, and adding Equatable conformance to reduce unnecessary SwiftUI re-renders
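
The hybrid dense/sparse recall mentioned above can be sketched as a weighted score fusion. This is a minimal illustration, not Vellum's actual implementation: the function names, the 0.7/0.3 weighting, and the use of plain term overlap in place of a real sparse scorer such as BM25 are all assumptions.

```python
import math

def dense_score(query_vec, doc_vec):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(q * d for q, d in zip(query_vec, doc_vec))
    norm = (math.sqrt(sum(q * q for q in query_vec))
            * math.sqrt(sum(d * d for d in doc_vec)))
    return dot / norm if norm else 0.0

def sparse_score(query_terms, doc_terms):
    """Simple lexical-overlap score (a stand-in for BM25-style sparse retrieval)."""
    if not query_terms:
        return 0.0
    overlap = set(query_terms) & set(doc_terms)
    return len(overlap) / len(set(query_terms))

def hybrid_search(query, memories, alpha=0.7, top_k=3):
    """Fuse dense and sparse scores with weight `alpha`, return the top-k memory texts."""
    scored = []
    for mem in memories:
        score = (alpha * dense_score(query["vec"], mem["vec"])
                 + (1 - alpha) * sparse_score(query["terms"], mem["terms"]))
        scored.append((score, mem["text"]))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_k]]
```

The appeal of this kind of fusion is that dense scores capture semantic relatedness while the sparse component rewards exact term matches (names, dates, identifiers) that embeddings can blur.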

v0.5.16

Major macOS performance and stability improvements, Outlook messaging support, smarter assistant context and memory, security hardening, and polished UI components.

  • Significant macOS performance and stability improvements: fixes for chat scroll freezes, sidebar re-render cascades, main thread blocking during file I/O, and SwiftUI invalidation issues — resulting in a noticeably smoother and more responsive experience
  • Outlook messaging support: Vellum can now connect to Microsoft Outlook as a messaging provider, joining the existing Slack integration and expanding where the assistant can be reached
  • Smarter assistant context and memory: the assistant now seeds capability memories for all skills (including bundled ones) and CLI commands at startup, improves semantic search, and better manages context window estimates
  • Security hardening across the assistant and gateway: removal of dangerouslySkipPermissions, stricter risk classifications for CLI subcommands and hooks directory mutations, validation of symlink targets before spawning, and tightened admin route authorization
  • Polished UI components and design consistency: redesigned skill detail page, shared file browser, improved dropdown and navigation items, context window indicator, and a consistent page container layout
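
A context window estimate like the one feeding the indicator above can be sketched as follows. The chars-per-token heuristic and the color thresholds here are illustrative assumptions, not Vellum's actual values.

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text (assumption).
    return max(1, len(text) // 4)

def context_usage(messages, context_window):
    """Fraction of the context window consumed by the given message texts."""
    used = sum(estimate_tokens(m) for m in messages)
    return min(used / context_window, 1.0)

def indicator_color(fraction):
    # Thresholds are illustrative, not the app's actual cutoffs.
    if fraction < 0.5:
        return "green"
    if fraction < 0.8:
        return "yellow"
    return "red"
```

A cheap estimate like this is enough to drive a UI indicator without calling a real tokenizer on every keystroke.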

v0.5.15

CLI signing key handling improvements and automatic key migration for smoother upgrades.

  • Improved signing key handling in the CLI to ensure the gateway's on-disk key is correctly prioritized, preventing potential authentication issues
  • Automatic migration of signing keys from the gateway disk when upgrading from versions prior to v0.5.14, ensuring a smooth upgrade experience without manual intervention

v0.5.14

Thinking blocks in chat, overhauled memory and retrieval, /compact command, expanded model support, and collapsible sidebar sections.

  • Thinking blocks are now visible in chat: the assistant's reasoning process is rendered inline as collapsible thinking blocks, giving users transparency into how responses are formed — thinking is now enabled by default
  • Significantly improved memory and retrieval: batched extraction, HyDE query expansion, MMR diversity ranking, a serendipity layer for surfacing unexpected relevant memories, and a new top-N retrieval format
  • New /compact slash command and context window indicator: manually trigger context compaction at any time, with a color-coded bar in the toolbar showing how full the context window is
  • Expanded model support and OpenRouter catalog: DeepSeek, Qwen, Mistral, Meta, Moonshot, and Amazon models added; Anthropic's 1M context window beta and fast mode now supported; OpenAI reasoning effort wired through to the API
  • Collapsible sidebar sections, channel conversations, and macOS polish: the Scheduled and Background sidebar sections are now collapsible with persisted state, and channel-bound conversations are displayed with read-only treatment
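
The MMR (Maximal Marginal Relevance) diversity ranking mentioned in the retrieval bullet above can be sketched as follows. This is the standard MMR formulation, not Vellum's code; the lambda value, similarity inputs, and data shapes are assumptions for illustration.

```python
def mmr(query_sim, doc_sims, candidates, lam=0.7, k=3):
    """Maximal Marginal Relevance: iteratively pick the candidate that best
    trades relevance to the query against redundancy with items already chosen.

    query_sim: dict mapping candidate -> relevance score to the query
    doc_sims:  dict mapping (a, b) pairs -> similarity between two candidates
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c):
            # Redundancy is the max similarity to anything already selected.
            redundancy = max(
                (doc_sims.get((c, s), doc_sims.get((s, c), 0.0)) for s in selected),
                default=0.0,
            )
            return lam * query_sim[c] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam = 1.0` this degenerates to plain relevance ranking; lowering lambda trades some relevance for diversity, which is what lets a near-duplicate memory be skipped in favor of a less similar but still relevant one.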