Open Source Models

https://github.com/Mindgrid-x1

Our open-source repositories provide foundational tools for robotics developers, enabling experimentation, prototyping, and community contribution. Current and planned modules include:

  • mindgrid-smolVLM: Lightweight vision-language model server for real-time object detection and scene captioning; LAN-based web UI; CPU- and GPU-accelerated inference (a client sketch follows this list).

  • mindgrid-llm: Instruction-following LLM tuned for robotics reasoning, task decomposition, and safety checks.

  • mindgrid-tts: Neural TTS with multilingual voices, edge- and cloud-deployable variants, and emotion presets.

  • mindgrid-stt: Low-latency speech recognition, noise robustness, and diarization hooks for multi-speaker scenarios.

  • mindgrid-motion: Motion planning primitives, navigation, and grasp planning, integrated with ROS2 and simulation platforms (MuJoCo, PyBullet); see the simulation sketch below.

  • mindgrid-eval: Evaluation harnesses with scenario libraries, golden answers, and reproducible benchmarks for all AI modules.

  • mindgrid-ros2-integration: Drop-in ROS2 nodes, message definitions, and bridges to connect MindGrid APIs with robotic control stacks (a minimal node sketch follows this list).
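
To make the module descriptions concrete, here is a minimal client sketch for mindgrid-smolVLM, assuming the server exposes an HTTP captioning endpoint on the LAN. The server address, the /caption route, and the JSON field names are assumptions for illustration; consult the repository README for the actual API.

```python
# Hypothetical client for a mindgrid-smolVLM server. The host, the
# /caption route, and the JSON schema are assumptions for illustration;
# check the repository README for the real API.
import base64

import requests

SERVER = "http://192.168.1.50:8080"  # assumed LAN address of the server


def caption_image(path: str) -> str:
    """Send one image and return the generated scene caption."""
    with open(path, "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode("ascii")}
    resp = requests.post(f"{SERVER}/caption", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["caption"]  # "caption" field is an assumption


if __name__ == "__main__":
    print(caption_image("frame.jpg"))
```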
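
The simulation side of mindgrid-motion can be pictured with plain PyBullet. Everything below is stock PyBullet; the planner itself is reduced to a fixed joint setpoint, since the package's planning API is not documented in this section.

```python
# Sketch of a simulation loop a motion-planning module could target.
# Pure PyBullet; a planner such as mindgrid-motion would supply the
# trajectory instead of the fixed setpoint used here.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless; use p.GUI for a viewer
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")
robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)

# Drive one joint toward a target position.
p.setJointMotorControl2(robot, jointIndex=1,
                        controlMode=p.POSITION_CONTROL,
                        targetPosition=0.5)
for _ in range(240):  # one simulated second at the default 240 Hz step
    p.stepSimulation()

print(p.getJointState(robot, 1)[0])  # joint position after the motion
p.disconnect()
```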
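
Finally, a minimal node of the kind mindgrid-ros2-integration would provide, written against the standard rclpy API. The topic names, message types, and bridging logic are assumptions for illustration only, not the package's actual node layout.

```python
# Sketch of a drop-in ROS2 node in the spirit of
# mindgrid-ros2-integration, using the standard rclpy API. The topic
# names and message choices are hypothetical.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String


class CaptionBridge(Node):
    def __init__(self):
        super().__init__("mindgrid_caption_bridge")
        # Subscribe to camera frames and republish captions; the real
        # bridge's topics and messages may differ.
        self.create_subscription(Image, "/camera/image_raw", self.on_frame, 10)
        self.pub = self.create_publisher(String, "/mindgrid/caption", 10)

    def on_frame(self, msg: Image) -> None:
        # Placeholder: a real node would forward msg.data to the VLM
        # server (e.g. via the client sketched above) and publish its caption.
        out = String()
        out.data = f"frame {msg.width}x{msg.height} received"
        self.pub.publish(out)


def main():
    rclpy.init()
    node = CaptionBridge()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```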

All open-source modules are released under permissive licenses and are designed for fast iteration and interoperability, allowing developers to experiment on any chassis.
