You can read more about them in the Datasets tutorial.
For now we will go with an instruction dataset.
A dataset consists of two `.jsonl` files: one for training and one for evaluation (the eval file is optional but recommended).
For example, create a `train.jsonl` file with one JSON object per line:
{"instruction": "Some important task with context ?", "output": "Boo" }{"instruction": "Some important task with context ?", "output": "Boo" }{"instruction": "Some important task with context ?", "output": "Boo" }
You can view all of our models on the Models Page.
You can then select a model and dataset and start training.
You can also launch a fine-tuning job directly from the Dashboard.
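For reference, launching a training job from code might look like the sketch below. Note that `client.fine_tune` and `client.get_checkpoints` are hypothetical placeholders introduced here for illustration, not confirmed SDK methods; consult the SDK reference for the actual names and signatures.

```python
# Hypothetical sketch -- the method names and parameters below are
# placeholders, not the SDK's confirmed API; check the SDK reference.
job_id = client.fine_tune(
    model="meta-llama/Llama-3.2-1B-Instruct",  # a base model from the Models Page
    train_file="train.jsonl",                  # dataset files created above
    eval_file="eval.jsonl",                    # optional but recommended
)

# Training produces checkpoints, which are used below to create an endpoint
checkpoints = client.get_checkpoints(job_id=job_id)  # hypothetical helper
```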
Once you have your checkpoints, you can create an endpoint to serve your fine-tuned model:
```python
# Create a list of checkpoints to use
checkpoints_list = [
    {"id": checkpoint["id"], "name": "step_" + str(checkpoint["step"])}
    for checkpoint in checkpoints
]

# Create the endpoint
endpoint_id = client.create_multi_lora_endpoint(
    name="My Endpoint New",
    lora_checkpoints=checkpoints_list,
    compute="A100-40GB",
)

# Wait for the endpoint to be ready
endpoint = client.wait_for_endpoint_ready(endpoint_id=endpoint_id)
```
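Because this is a multi-LoRA endpoint, each entry in `checkpoints_list` is deployed under its own name (`step_<N>` above), which typically means a single deployment can serve several checkpoints side by side, so you can compare them without redeploying.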
You can use your fine-tuned model through the OpenAI-compatible API:
```python
from openai import OpenAI

openai_endpoint_url = endpoint["url"]

# Use a distinct name so we don't shadow the fine-tuning client above
openai_client = OpenAI(
    api_key="your_api_key_here",
    base_url=f"{openai_endpoint_url}/v1",
)

completion = openai_client.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct",
    prompt="Translate the following English text to French",
    max_tokens=60,
)

print(completion.choices[0].text)
```
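Since `meta-llama/Llama-3.2-1B-Instruct` is an instruction-tuned model, you may get better results from the chat completions endpoint, which applies the model's chat template for you. A minimal sketch, assuming the endpoint also exposes the OpenAI-compatible `/chat/completions` route:

```python
# Assumes the endpoint also serves the OpenAI-compatible /chat/completions route
chat_completion = openai_client.chat.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct",
    messages=[
        {
            "role": "user",
            "content": "Translate the following English text to French: Hello, world.",
        }
    ],
    max_tokens=60,
)

print(chat_completion.choices[0].message.content)
```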