Metadata-Version: 2.4
Name: npmai
Version: 0.1.4
Summary: A lightweight, LangChain-compatible LLM bridge built on Ollama (no Ollama installation needed), supporting 'LLaMA-3.2', 'CodeLLaMA-Instruct 7B', 'Gemma-2-Instruct 9B', 'Mistral 7B Instruct', 'Qwen-2.5-Coder 7B', 'Phi-3 Medium (8B)', 'Falcon 7B Instruct', 'Baichuan-2-7B', 'InternLM-Chat-7B', 'Vicuna 7B'. Built by Sonu Kumar.
Author-email: "Sonu Kumar (Viral Boy Bihar)" <sonuramashishnpm@gmail.com>
License: MIT
Project-URL: Homepage, https://npmai.onrender.com
Project-URL: Source, https://github.com/sonuramashishnpm/npmai
Requires-Python: <3.14,>=3.10
Description-Content-Type: text/markdown
Requires-Dist: requests
Requires-Dist: langchain-core

# 🚀 npmai 
**By Sonu Kumar Ramashish**

[![PyPI version](https://img.shields.io/pypi/v/npmai)](https://pypi.org/project/npmai/0.1.4)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

`npmai` is a lightweight Python package designed to bridge the gap between users and open-source LLMs. Connect instantly to 10+ powerful models served through **Ollama**, with **no installation, no login, and no API keys required.**

---

## ✨ Features

- 🔗 **Zero Setup:** No local Ollama installation or complex API signups needed.
- 🤖 **Multi-Model Support:** Execute prompts across 10+ open-source models simultaneously.
- 🧠 **Built-in Memory:** Native memory support (new in v0.1.2), with no need for external agentic frameworks.
- 🕵️‍♂️ **RAG Framework:** Convert PDFs, images, video, audio, and YouTube videos to text without installing Whisper or any model locally. Just use npmai.
- ⚡ **Framework Ready:** Fully compatible with **LangChain**, **CrewAI**, and other orchestration tools (see the sketch after this list).
- 🛠️ **Universal API:** Access via Python, JavaScript, C++, Java, or C.
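Because `npmai` depends on `langchain-core`, the `Ollama` class is meant to slot straight into LangChain pipelines. A minimal sketch, assuming `Ollama` implements the LangChain `Runnable` interface and falls back to a default model when none is passed (neither detail is documented in this README):

```python
from langchain_core.prompts import PromptTemplate
from npmai import Ollama

llm = Ollama()
prompt = PromptTemplate.from_template("Summarize in one sentence: {topic}")

# LCEL piping works for any Runnable-compatible LLM.
chain = prompt | llm
print(chain.invoke({"topic": "retrieval-augmented generation"}))
```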

---

## 🖥️ Supported Models

| Model Name | Description |
| :--- | :--- |
| `llama3.2` | Meta's latest powerful small model |
| `gemma-2-instruct-9b` | Google's high-performance open model |
| `qwen-2.5-coder-7b` | Alibaba's elite coding assistant |
| `mistral-7b-instruct` | Versatile and efficient instructor model |
| `phi-3-medium` | Microsoft's highly capable reasoning model |
| *And many more...* | Falcon, Baichuan-2, InternLM, Vicuna |

---

## ⚙️ Installation

Install via pip in seconds:

```bash
pip install npmai
```

> 💡 **Tip for Python 3.13+:** use `py -3.13 -m pip install npmai`
---

## 💡 Quick Start (Python)

```python
from npmai import Ollama

# Initialize the LLM
llm = Ollama()

# Simple invocation
response = llm.invoke("What is the future of AI?", model="llama3.2")
print(response)
```
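Multi-model support pairs naturally with the `model` argument shown above. A small sketch comparing a few models side by side (identifiers are taken from the Supported Models table; their availability on the hosted endpoint is assumed):

```python
from npmai import Ollama

llm = Ollama()
prompt = "Write a one-line Python function that squares a number."

# Model identifiers follow the Supported Models table above.
for model in ["llama3.2", "qwen-2.5-coder-7b", "mistral-7b-instruct"]:
    print(f"--- {model} ---")
    print(llm.invoke(prompt, model=model))
```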

---

## 🌐 API Usage (Other Languages)

If you aren't using Python, hit our global endpoint:

```
POST https://npmai-api.onrender.com
```

### 🟡 JavaScript

```javascript
const response = await fetch("https://npmai-api.onrender.com", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Hello! Who are you?",
    model: "llama3.2",
    temperature: 0.4
  })
});
const data = await response.json();
console.log(data.response);
```

### 🔵 C++

```cpp
// Uses cpp-httplib and nlohmann/json (implied by the original snippet);
// an https:// base URL needs cpp-httplib built with OpenSSL support.
#include <httplib.h>
#include <nlohmann/json.hpp>

httplib::Client cli("https://npmai-api.onrender.com");
nlohmann::json payload = {
    {"prompt", "Explain quantum physics."},
    {"model", "llama3.2"},
    {"temperature", 0.4}
};
auto res = cli.Post("/llm", payload.dump(), "application/json");
```
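For quick endpoint tests from Python without the SDK, the same JSON contract can be exercised with `requests` (already a dependency of this package). A minimal sketch; the `/llm` path follows the C++ example and the Resources section, and the `response` field mirrors the JavaScript example:

```python
import requests

payload = {
    "prompt": "Hello! Who are you?",
    "model": "llama3.2",
    "temperature": 0.4,
}
r = requests.post("https://npmai-api.onrender.com/llm", json=payload, timeout=120)
r.raise_for_status()
print(r.json()["response"])
```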

---

## 🆕 Latest Update: Version 0.1.4

You no longer need to write your own code for RAG preprocessing: PDF, image, video, audio, and YouTube-video-to-text conversion all run in the cloud, for free. Nothing is loaded or processed locally (no Whisper, no other models), and there are no sign-up, sign-in, or API-key hurdles.
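The RAG interface itself is not documented in this README. Purely as a hypothetical illustration (the `rag` method and its `source` parameter are invented names for this sketch, not a confirmed API; see the documentation for the real interface):

```python
from npmai import Ollama

llm = Ollama()

# Hypothetical call: 'rag' and 'source' are illustrative names only.
answer = llm.rag(
    "What are the key findings?",
    source="report.pdf",  # could be a PDF, image, audio/video file, or YouTube URL
    model="llama3.2",
)
print(answer)
```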
---

## ⚠️ Important Notes

- **Experimental Use:** This project is designed for educational purposes, small-scale experimentation, and demo projects.
- **Scale Responsibly:** For high-volume production traffic, please support the original AI researchers and infrastructure providers.

---

## 🔗 Resources

- **Documentation:** [npmai.onrender.com](https://npmai.onrender.com)
- **API Endpoint:** `npmai-api.onrender.com/llm`

Developed with ❤️ to make AI accessible to everyone.

**Developer and Maintainer:** Sonu Kumar
