Falcon-180B
Falcon-180B is a 180B-parameter causal decoder-only model built by TII and trained on 3,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Falcon-180B TII License and Acceptable Use Policy.
Hugging Face: https://huggingface.co/tiiuae/falcon-180b
Requirements
- At least 400GB of memory is needed to swiftly run inference with Falcon-180B.
- PyTorch 2.0+ is required.
- Running inference at full bfloat16 precision requires approximately 8x A100 80GB GPUs or equivalent capacity.
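The 400GB figure follows from the weight count alone. A back-of-the-envelope estimate (illustrative arithmetic only; real usage also includes the KV cache and activations, so the true requirement is higher):

```python
# Rough memory footprint of Falcon-180B weights in bfloat16.
params = 180e9              # 180B parameters
bytes_per_param = 2         # bfloat16 = 2 bytes per parameter
weights_gb = params * bytes_per_param / 1024**3

print(f"~{round(weights_gb)} GiB for weights alone")  # ~335 GiB
```

Spread across 8x A100 80GB (640GB total), the weights fit with headroom left for the KV cache and activations.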
Dependencies
pip install -r requirements.txt
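The contents of requirements.txt are not shown on this page. Based on the imports used in the quickstart below, a plausible minimal pin set would look like the following (an assumption — verify against the actual file in the repository; `accelerate` is needed for `device_map="auto"`):

```
torch>=2.0
transformers
accelerate
```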
Quickstart
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-180b"

tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; device_map="auto" shards the model
# across all available GPUs (requires the accelerate package).
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
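The `do_sample=True, top_k=10` arguments restrict sampling to the 10 most likely next tokens at each step, then draw from their renormalized distribution. A minimal pure-Python sketch of that idea (`top_k_sample` is a hypothetical helper for illustration, not part of the transformers API):

```python
import math
import random

def top_k_sample(logits, k=10, rng=random):
    # Hypothetical illustration of top-k sampling:
    # keep the k highest-scoring token indices...
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # ...softmax over just those k logits (subtract max for numerical stability)...
    mx = max(logits[i] for i in ranked)
    weights = [math.exp(logits[i] - mx) for i in ranked]
    # ...and sample one index in proportion to those weights.
    return rng.choices(ranked, weights=weights, k=1)[0]

# With 100 tokens whose logits increase with index, only indices 90-99 can be drawn.
token = top_k_sample([float(x) for x in range(100)], k=10)
print(token)
```

Lower `k` makes generations more conservative; higher `k` admits less likely tokens and increases diversity.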
Author: Jeebiz  Created: 2023-12-12 12:42
Last edited by: Jeebiz  Updated: 2025-05-12 09:20