MLX Swift LM - Run LLMs and VLMs on Apple Silicon using MLX. Covers local inference, streaming, tool calling, LoRA fine-tuning, and embeddings.
Initial release of mlx-swift-lm.

- Run Large Language Models (LLMs) and Vision-Language Models (VLMs) locally on Apple Silicon using MLX.
- Supports local inference, streaming text generation, and both single-turn and multi-turn chat via a ChatSession API.
- Enables tool/function calling, LoRA/DoRA fine-tuning, and text embeddings for search and semantic applications.
- Provides Swift-friendly factory/load interfaces for a variety of model types (LLM, VLM, Embeddings).
- Offers quick-start code examples and comprehensive API references for common ML workflows.
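As a rough illustration of the ChatSession flow described above, here is a minimal sketch. It assumes the package exposes a `loadModel(id:)` factory and a `ChatSession` type with `respond(to:)` and `streamResponse(to:)` methods; the model identifier is an example Hugging Face repo, and exact names and signatures may differ from the released API, so treat this as a sketch rather than a definitive quick start.

```swift
// Sketch only: assumes mlx-swift-lm's loadModel(id:) and ChatSession API.
// Requires Apple Silicon and the MLX Swift packages to actually run.
import MLXLMCommon

// Load a quantized model by Hugging Face repo id (example identifier).
let model = try await loadModel(id: "mlx-community/Qwen2.5-1.5B-Instruct-4bit")

// A ChatSession keeps conversation history, enabling multi-turn chat.
let session = ChatSession(model)

// Single-turn: await the full response.
let answer = try await session.respond(to: "What is the capital of France?")
print(answer)

// Streaming: consume tokens as they are generated.
for try await chunk in session.streamResponse(to: "Now summarize that in one sentence.") {
    print(chunk, terminator: "")
}
```

Because the session retains prior turns, the second prompt ("summarize that") can refer back to the first answer without re-sending the conversation manually.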