Cognitive Computations

Access 5 Cognitive Computations models on OpenRouter including Dolphin3.0 R1 Mistral 24B, Dolphin3.0 Mistral 24B, and Dolphin Llama 3 70B 🐬. Compare pricing, context windows, and capabilities.

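The models listed below are served through OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch of calling one of them, assuming the model slug `cognitivecomputations/dolphin3.0-r1-mistral-24b` (confirm the exact id on the model's page) and an `OPENROUTER_API_KEY` environment variable:

```python
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request for OpenRouter's OpenAI-compatible API."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(API_URL, data=body, headers=headers)


if __name__ == "__main__":
    # The model slug here is an assumption; check the model page for the exact id.
    req = build_request("cognitivecomputations/dolphin3.0-r1-mistral-24b", "Hello!")
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI schema, the official OpenAI SDKs also work by pointing their base URL at `https://openrouter.ai/api/v1`.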

  • Dolphin3.0 R1 Mistral 24B

    Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models. It is designed to be the ultimate general-purpose local model, covering coding, math, agentic, function-calling, and general use cases. The R1 version was trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset. Dolphin aims to be a general-purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, and Gemini. Part of the Dolphin 3.0 Collection, curated and trained by Eric Hartford, Ben Gitter, BlouseJury, and DphnAI.

    by cognitivecomputations · Feb 13, 2025 · 33K context

  • Dolphin3.0 Mistral 24B

    Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models. It is designed to be the ultimate general-purpose local model, covering coding, math, agentic, function-calling, and general use cases. Dolphin aims to be a general-purpose instruct model, similar to the models behind ChatGPT, Claude, and Gemini. Part of the Dolphin 3.0 Collection, curated and trained by Eric Hartford, Ben Gitter, BlouseJury, and DphnAI.

    by cognitivecomputations · Feb 13, 2025 · 33K context

  • Dolphin Llama 3 70B 🐬

    Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a fine-tune of Llama 3 70B. It demonstrates improvements in instruction, conversation, coding, and function-calling abilities compared to the original. The model is uncensored and stripped of alignment and bias; it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models. Usage of this model is subject to Meta's Acceptable Use Policy.

    by cognitivecomputations · Jul 19, 2024 · 8K context

  • Dolphin 2.9.2 Mixtral 8x22B 🐬

    Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a finetune of Mixtral 8x22B Instruct. It features a 64k context length and was fine-tuned with a 16k sequence length using ChatML templates. This model is a successor to Dolphin Mixtral 8x7B. The model is uncensored and stripped of alignment and bias; it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models. #moe #uncensored

    by cognitivecomputations · Jun 8, 2024 · 66K context

  • Dolphin 2.6 Mixtral 8x7B 🐬

    This is a 16k-context fine-tune of Mixtral-8x7b. It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and stripped of alignment and bias; it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models. #moe #uncensored

    by cognitivecomputations · Dec 21, 2023 · 33K context