Open-source Chinese LLaMA and Alpaca large language models for local CPU/GPU training and deployment.
An open-source framework for prompt-learning, a powerful technique for fine-tuning language models.
Must-read papers on prompt-based tuning for pre-trained language models.
Top2Vec learns jointly embedded topic, document, and word vectors for semantic search and topic modeling.
A pre-trained RoBERTa language model for Chinese NLP tasks.
An open-source framework for building knowledgeable large language models with fine-tuning capabilities.
A curated list of must-read papers on knowledge editing for large language models.
A curated list of NLP resources focused on Transformer networks, the attention mechanism, and large language models.