
A model trained with P-Tuning, intended for inference.
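
For context, the repository name suggests a prefix-tuning adapter on top of t5-large. Below is a minimal sketch of how such an adapter is typically created with the peft library; the hyperparameters (e.g. num_virtual_tokens=20) are illustrative assumptions, not the settings used to train this checkpoint.

from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

# Illustrative configuration; the actual training settings for this checkpoint are not published here.
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
peft_model = get_peft_model(base_model, peft_config)
peft_model.print_trainable_parameters()  # only the prefix parameters are trainable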

Usage

# Load the configuration, base model, tokenizer, and the PEFT adapter:
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel, PeftConfig

peft_model_id = "Laurie/t5-large_PREFIX_TUNING_SEQ2SEQ"

config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Get and tokenize some text about financial news:
inputs = tokenizer(
    "Berkshire Hathaway CEO Warren Buffett on Saturday assailed regulators, politicians and the media for confusing the public about the safety of U.S. banks and said that conditions could worsen.",
    return_tensors="pt",
)

# Put the model on a GPU if one is available and generate the predicted text sentiment:
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))

# => ['negative']
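
The same pipeline also works for a batch of inputs. A short sketch, assuming padding to the longest sequence in the batch (the headlines below are made-up examples):

# Tokenize a batch of illustrative headlines and generate a sentiment for each:
texts = [
    "Stocks rallied after the central bank signalled a pause in rate hikes.",
    "The company reported a wider-than-expected quarterly loss.",
]
inputs = tokenizer(texts, return_tensors="pt", padding=True)

with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=10,
    )
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))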