Falcon-7B-Instruct

Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.

Hugging Face: https://huggingface.co/tiiuae/falcon-7b-instruct

Requirements

  • At least 16GB of memory is required to run inference with Falcon-7B-Instruct at a reasonable speed.
  • PyTorch 2.0+ is required (see the sanity check after this list).
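
Before loading the model, you can confirm that the environment meets these requirements. This is a minimal sanity-check sketch, not part of the official quickstart:

import torch

# Require PyTorch 2.0 or newer, as noted above.
major = int(torch.__version__.split(".")[0])
assert major >= 2, f"PyTorch 2.0+ required, found {torch.__version__}"

# Report whether a CUDA GPU is available and supports bfloat16;
# without one, the pipeline below falls back to CPU and is much slower.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("bfloat16 supported:", torch.cuda.is_bf16_supported())
else:
    print("No GPU detected; inference will run on CPU.")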

Dependencies

pip install -r requirements.txt
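
If the repository's requirements.txt is not at hand, you can install the packages the quickstart below actually uses directly. This list is an assumption based on that code (accelerate for device_map="auto", einops for Falcon's custom modeling code), not the repository's exact pinned file:

pip install "torch>=2.0" transformers accelerate einops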

Quickstart

from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

# Load the tokenizer and build a text-generation pipeline.
# bfloat16 halves memory use, and device_map="auto" places the weights
# on the available GPU(s), falling back to CPU if none is found.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Sample a single continuation of the prompt (up to 200 tokens total).
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")