MiniCPM Technical Report | OmniLMM Multi-modal Model | CPM-C ~100B Model Trial
MiniCPM is a series of end-side LLMs jointly open-sourced by ModelBest Inc. and TsinghuaNLP (the Natural Language Processing Lab of Tsinghua University). The main language model, MiniCPM-2B, has only 2.4B non-embedding parameters. We fully release the MiniCPM-2B model parameters for academic research and limited commercial use, along with all checkpoints saved during training and most non-proprietary training data, to support research on model mechanisms.
Detailed evaluation results are available in the GitHub repository.
Notice: we have found that generation quality through Hugging Face `transformers` is slightly lower than through vLLM, so we recommend benchmarking with vLLM. We are investigating the cause.
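As an illustration of vLLM-based inference, here is a minimal sketch of our own (not the official example). It assumes a vLLM build that supports MiniCPM via `trust_remote_code`, and that the chat variants expect the `<用户>…<AI>` prompt wrapping:

```python
# Minimal sketch: run MiniCPM-2B-dpo-bf16 through vLLM instead of transformers.
# Assumptions: vLLM supports this model with trust_remote_code=True, and the
# "<用户>...<AI>" wrapping is the chat prompt format expected by the DPO model.
from vllm import LLM, SamplingParams

llm = LLM(model='OpenBMB/MiniCPM-2B-dpo-bf16', trust_remote_code=True, dtype='bfloat16')
params = SamplingParams(temperature=0.8, top_p=0.8, max_tokens=256)
prompt = "<用户>山东省最高的山是哪座山?<AI>"
for out in llm.generate([prompt], params):
    print(out.outputs[0].text)
```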
Known limitations:

- Due to limitations in model size, the model may produce hallucinations. In particular, because the DPO model tends to generate longer responses, hallucinations are more likely to occur. We will continue to iterate on and improve the MiniCPM model.
- To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use the ShareGPT open-source corpus as part of the training data, the model may output identity information similar to that of the GPT series models.
- Due to the limitation of model size, the model's output is strongly influenced by the prompt, which may lead to inconsistent results across multiple attempts.
- Due to limited model capacity, the model's knowledge recall is not accurate. In the future, we will combine retrieval-augmented generation (RAG) to enhance the model's knowledge recall; a schematic sketch follows this list.
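As a schematic illustration of how prompt-level RAG could work (our own sketch, not the authors' planned method; `retrieve` is a hypothetical helper standing in for any retriever, such as BM25 or a dense index):

```python
# Schematic prompt-level RAG: prepend retrieved evidence to the user question
# before calling the model's chat interface. `retrieve(question, k)` is a
# hypothetical function returning the k most relevant text passages.
def rag_chat(model, tokenizer, question, retrieve, k=3):
    passages = retrieve(question, k)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = f"Answer the question using the references below.\n{context}\n\nQuestion: {question}"
    response, _history = model.chat(tokenizer, prompt, temperature=0.8, top_p=0.8)
    return response
```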
Model versions available for download:

| HuggingFace | ModelScope | WiseModel |
|-------------|------------|-----------|
| sft-bf16 | sft-bf16 | sft-bf16 |
| sft-fp32 | sft-fp32 | sft-fp32 |
| dpo-bf16 | dpo-bf16 | dpo-bf16 |
| dpo-fp16 | dpo-fp16 | dpo-fp16 |
| dpo-fp32 | dpo-fp32 | dpo-fp32 |
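For example, the bf16 DPO variant can be fetched locally via `huggingface_hub` (a minimal sketch; the repo id mirrors the model path used in the usage code below):

```python
# Minimal sketch: download a full model snapshot from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download('OpenBMB/MiniCPM-2B-dpo-bf16')
print(local_dir)  # local directory with config, tokenizer, and weight files
```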
Install `transformers>=4.36.0` and `accelerate`, then run the code below. Note: the model's data type must be specified explicitly in `from_pretrained`, otherwise large numerical errors may occur.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = 'OpenBMB/MiniCPM-2B-dpo-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
# Specify torch_dtype explicitly to avoid large numerical errors (see note above).
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16,
                                             device_map='cuda', trust_remote_code=True)

responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?",
                               temperature=0.8, top_p=0.8)
print(responds)
```
* Expected output
```shell
山东省最高的山是泰山,海拔1545米。
相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
```

(In English: "The highest mountain in Shandong Province is Mount Tai, at an elevation of 1,545 m. Compared with Huangshan (1,864 m), Mount Tai is lower, by roughly 319 m.")
License:

- This repository is released under the Apache-2.0 License.
- The usage of MiniCPM model weights must strictly follow the General Model License (GML).
- The models and weights of MiniCPM are completely free for academic research.
- If you intend to use the model for commercial purposes, please contact cpm@modelbest.cn to obtain written authorization; free commercial use is also permitted after registration.
We assume no responsibility for any problems arising from the use of the MiniCPM open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems caused by the model being misled, misused, disseminated, or otherwise improperly exploited.

As a language model, MiniCPM generates content by learning from a vast amount of text. However, it does not possess the ability to comprehend or express personal opinions or value judgments. Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
Please cite our technical report if you find our work valuable.
```bibtex
@inproceedings{minicpm2024,
  title={MiniCPM: Unveiling the Potential of End-side Large Language Models},
  booktitle={OpenBMB Blog},
  year={2024}
}
```