Explore Projects

Discover 1,105 open source projects

Active filters (1):
Search: llm

Showing 1101-1105 of 1,105 projects

myscale/MyScaleDB

A high-performance vector-search and full-text-search database forked from ClickHouse, focused on AI and ML use cases.

1.0K
Experimental
C++
Vector Databases
Databases
#ann #big-data #embedding

beyondguo/LLM-Tuning

A library for efficiently fine-tuning large language models (LLMs) with a sample-based approach.

1.0K
Archived
HTML
LLM Frameworks
Fine-tuning
React
#large-language-model #fine-tuning #sample-based-design

dillionverma/llm.report

An open-source platform for logging and analyzing OpenAI API usage, including cost tracking and prompt improvement.

1.0K
Archived
TypeScript
LLM Wrappers & SDKs
MCP Frameworks
Next.js
#gpt-3 #gpt-4 #openai

JudgmentLabs/judgeval

An open-source post-building layer for AI agents, providing environment data and evaluations to power agent post-training and monitoring.

1.0K
Active
Python
Agents & Orchestration
LLM Frameworks
Python
#agent #agentic-ai #llm-evaluation

bilibili/Index-1.9B

A lightweight multilingual Large Language Model (LLM) for natural language processing tasks.

1.0K
Stable
Python
LLM Frameworks
LLM Wrappers & SDKs
Python
#llm #nlp #natural-language-processing

wang-rui/phishguard-scaffold

A Python-based tool for detecting and mitigating phishing attacks on social media using LLM-based modeling.

1.0K
Stable
Python
LLM Frameworks
Security Research
#phishing-detection #social-media-security #llm-modeling

viddexa/autollm

An open-source Python library that helps developers quickly build and deploy RAG-based LLM web apps.

1.0K
Archived
Python
LLM Frameworks
RAG & Vector
FastAPI
#llm #retrieval-augmented-generation #vector-database

WangRongsheng/CareGPT

An open-source medical large language model (LLM) project that aims to accelerate the development of medical LLMs.

1.0K
Archived
Python
LLM Frameworks
Fine-tuning
Python
#medical-llm #open-source #fine-tuning

pjlab-sys4nlp/llama-moe

A library for building Mixture-of-Experts (MoE) models from the LLaMA language model with continual pre-training.

1.0K
Archived
Python
LLM Frameworks
Continual Pre-training
PyTorch
#llama #moe #continual-pretraining
