
Fine-tune MPT-7B on Amazon SageMaker | by João Pereira | Jun, 2023


Learn how to prepare a dataset and create a training job to fine-tune MPT-7B on Amazon SageMaker

New large language models (LLMs) are announced every week, each trying to beat its predecessor and take over the evaluation leaderboards. One of the latest is MPT-7B, released by MosaicML. Unlike other models of its kind, this 7-billion-parameter model is open-source and licensed for commercial use (Apache 2.0 license) 🚀.

Foundation models like MPT-7B are pre-trained on datasets with trillions of tokens (100 tokens ~ 75 words) crawled from the web and, when prompted well, they can produce impressive outputs. However, to truly unlock the value of large language models in real-world applications, smart prompt engineering might not be enough to make them work for your use case, and fine-tuning the foundation model on a domain-specific dataset becomes necessary.

LLMs have billions of parameters, which makes fine-tuning such large models challenging. The good news is that fine-tuning is much cheaper and faster than pre-training the foundation model, given that 1) the domain-specific datasets are "small" and 2) fine-tuning requires only a few passes over the training data.

Here is what we will learn in this article:

  • How to create and structure a dataset for fine-tuning a large language model.
  • What Fully Sharded Data Parallel (FSDP) is and how to configure a distributed training job that uses it.
  • How to define a 😊 HuggingFace estimator.
  • How to launch a training job in Amazon SageMaker that fine-tunes MPT-7B.

Let's start by installing the SageMaker Python SDK and a few other packages. This SDK makes it possible to train and deploy machine learning models on AWS with a few lines of Python code. The code below is available in the sagemaker_finetuning.ipynb notebook on GitHub. Run the notebook in SageMaker Studio, a SageMaker notebook instance, or on your laptop after authenticating to an AWS account.

!pip install "sagemaker==2.162.0" s3path boto3 --quiet

from sagemaker.huggingface import HuggingFace
from sagemaker.inputs import TrainingInput
from sagemaker import s3_utils
import sagemaker
import boto3
import json

The next step is to define the paths where the data will be stored in S3 and to create a SageMaker session.

# Define S3 paths
bucket = "<YOUR-S3-BUCKET>"
training_data_path = f"s3://{bucket}/toy_data/train/data.jsonl"
test_data_path = f"s3://{bucket}/toy_data/test/data.jsonl"
output_path = f"s3://{bucket}/outputs"
code_location = f"s3://{bucket}/code"

# Create SageMaker session
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

We will create a dummy dataset to demonstrate how to fine-tune MPT-7B. Since training models of this size on a complete dataset takes long and is costly, it is a good idea to first test & debug the training job on a small dataset and only then scale training to the complete dataset.

  • Format the dataset as a list of dictionaries — The dataset should be formatted as a list of dictionaries, where each example has a key-value structure, e.g.,
{
    "prompt": "What is a Pastel de Nata?",
    "response": "A Pastel de Nata is a Portuguese egg custard tart pastry, optionally dusted with cinnamon."
}

The prompt is the input given to the model (e.g., a question). The response is the output that the model is trained to predict (e.g., the answer to the question in the prompt). The raw prompt is often preprocessed to fit into a prompt template that helps the model generate better outputs. Note that the model is trained for causal language modelling, so you can think of it as a "document completer". It is a good idea to design the prompt template in such a way that the model thinks it is completing a document. Andrej Karpathy explains this mechanism well in his talk State of GPT.

prompt_template = """Write a response that appropriately solutions the query under.
### Query:
{query}

### Response:
"""

dataset = [
    {"prompt": "What is a Pastel de Nata?",
     "response": "A Pastel de Nata is a Portuguese egg custard tart pastry, optionally dusted with cinnamon."},
    {"prompt": "Which museums are famous in Amsterdam?",
     "response": "Amsterdam is home to various world-famous museums, and no trip to the city is complete without stopping by the Rijksmuseum, Van Gogh Museum, or Stedelijk Museum."},
    {"prompt": "Where is the European Parliament?",
     "response": "Strasbourg is the official seat of the European Parliament."},
    {"prompt": "How is the weather in The Netherlands?",
     "response": "The Netherlands is a country that boasts a typical maritime climate with mild summers and cold winters."},
    {"prompt": "What are Poffertjes?",
     "response": "Poffertjes are a traditional Dutch batter treat. Resembling small, fluffy pancakes, they are made with yeast and buckwheat flour."},
]

# Format prompts based on the template
for example in dataset:
    example["prompt"] = prompt_template.format(question=example["prompt"])

training_data, test_data = dataset[0:4], dataset[4:]

print(f"Size of training data: {len(training_data)}\nSize of test data: {len(test_data)}")

  • Upload the training and test data to S3 — Once the training and test sets are ready and formatted as lists of dictionaries, we upload them to S3 as JSON lines using the utility function below:
def write_jsonlines_to_s3(data, s3_path):
    """Writes a list of dictionaries as a JSON lines file to S3"""

    json_string = ""
    for d in data:
        json_string += json.dumps(d) + "\n"

    s3_client = boto3.client("s3")

    bucket, key = s3_utils.parse_s3_url(s3_path)
    s3_client.put_object(
        Body=json_string,
        Bucket=bucket,
        Key=key,
    )

write_jsonlines_to_s3(training_data, training_data_path)
write_jsonlines_to_s3(test_data, test_data_path)
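As a quick sanity check (my addition, not in the original notebook), we can read the training file back from S3 with boto3 and confirm it contains the expected number of JSON lines:

# Read the JSON lines file back from S3 and count non-empty lines
bucket_name, key = s3_utils.parse_s3_url(training_data_path)
body = boto3.client("s3").get_object(Bucket=bucket_name, Key=key)["Body"].read().decode("utf-8")
print(sum(1 for line in body.splitlines() if line.strip()))  # expected: 4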

With the datasets available in S3, we will now create a training job in Amazon SageMaker. For that, we have to create an entry point script, modify the configuration file specifying the training settings, and define a HuggingFace estimator. We will (re-)use the training script from LLM Foundry and the Composer library's CLI launcher, which sets up the distributed training environment. Both of these packages are maintained by MosaicML, the company behind MPT-7B. The working folder should be structured like:

└── fine-tune-mpt-7b-sagemaker/
    ├── launcher.sh
    ├── finetuning_config.yaml
    └── sagemaker_finetuning.ipynb

We will now dive deep into each of these files.

  • Create a configuration file finetuning_config.yaml — The template provided in the LLM Foundry repository is a good starting point, in particular the mpt-7b-dolly-sft.yaml file. However, depending on your dataset size and training instance, you might have to adjust some of these configurations, such as the batch size. I have modified the file to fine-tune the model in SageMaker (check finetuning_config.yaml). The parameters that you should pay attention to are the following:
max_seq_len: 512
global_seed: 17

...
# Dataloaders
train_loader:
  name: finetuning
  dataset:
    hf_name: json
    hf_kwargs:
      data_dir: /opt/ml/input/data/train/
...

eval_loader:
  name: finetuning
  dataset:
    hf_name: json
    hf_kwargs:
      data_dir: /opt/ml/input/data/test/

...
max_duration: 3ep
eval_interval: 1ep
...
global_train_batch_size: 128

...
# FSDP
fsdp_config:
  sharding_strategy: FULL_SHARD
  mixed_precision: PURE
  activation_checkpointing: true
  activation_checkpointing_reentrant: false
  activation_cpu_offload: false
  limit_all_gathers: true
  verbose: false

# Checkpoint to local filesystem or remote object store
save_folder: /tmp/checkpoints
dist_timeout: 2000

max_seq_len indicates the maximum number of tokens of the input (remember that 100 tokens ~ 75 words). The training and test data will be loaded with the 😊 Datasets library from the /opt/ml/input/data/{train, test} directories inside the container attached to the training job. Check out the SageMaker Training Storage Folders documentation to understand how the container directories are structured. max_duration specifies the number of epochs of fine-tuning; two to three epochs is usually a good choice. eval_interval indicates how often the model is evaluated on the test set.
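For intuition, the finetuning dataloader with hf_name: json essentially points the 😊 Datasets JSON loader at those channel directories. Here is a minimal sketch of the equivalent call (my illustration, not code from LLM Foundry):

from datasets import load_dataset

# Load all JSON lines files found in the training channel directory
train_dataset = load_dataset("json", data_dir="/opt/ml/input/data/train/", split="train")
print(train_dataset[0]["prompt"], train_dataset[0]["response"])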

The distributed training strategy is Fully Sharded Data Parallel (FSDP), which enables efficient training of large models like MPT-7B. Unlike the traditional data parallel strategy, which keeps a copy of the model on each GPU, FSDP shards model parameters, optimizer states, and gradients across data parallel workers. If you want to learn more about FSDP, check this insightful PyTorch intro post. FSDP is integrated in Composer, the distributed training library used by LLM Foundry.
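Composer configures FSDP for us from the fsdp_config block above, but the underlying mechanism is PyTorch's FullyShardedDataParallel wrapper. Below is a minimal, illustrative sketch of what that wrapping looks like, assuming a toy model in place of MPT-7B and a launcher such as torchrun starting one process per GPU; it is not the author's or Composer's code:

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy

dist.init_process_group("nccl")                        # one process per GPU
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Sequential(                           # toy stand-in for MPT-7B
    torch.nn.Linear(512, 2048), torch.nn.GELU(), torch.nn.Linear(2048, 512)
).cuda()

fsdp_model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.FULL_SHARD,     # shard params, grads, optimizer states
    mixed_precision=MixedPrecision(param_dtype=torch.bfloat16),
    limit_all_gathers=True,
)

# Training then proceeds as usual: forward, backward, optimizer step
optimizer = torch.optim.AdamW(fsdp_model.parameters(), lr=1e-5)
loss = fsdp_model(torch.randn(8, 512, device="cuda")).mean()
loss.backward()
optimizer.step()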

save_folder determines where the model checkpoint (.pt file) is saved. We set it to the temporary folder /tmp/checkpoints.

  • Create the entry point script launcher.sh — A bash script is used as the entry point. It clones the LLM Foundry repository, installs requirements, and, more importantly, runs the training script using the Composer library's distributed launcher. Note that training jobs in SageMaker typically run the training script with a command like python train.py. However, it is possible to pass a bash script as the entry point, which gives us more flexibility in this scenario. Finally, we convert the model checkpoint saved to /tmp/checkpoints to the HuggingFace model format and save the final artifacts into /opt/ml/model/. SageMaker compresses all files in this directory into a tarball model.tar.gz and uploads it to S3. The tarball comes in handy for inference.
# Clone the llm-foundry package from MosaicML
# This is where the training script is hosted
git clone https://github.com/mosaicml/llm-foundry.git
cd llm-foundry

# Install required packages
pip install -e ".[gpu]"
pip install git+https://github.com/mosaicml/composer.git@dev

# Run the training script with the fine-tuning configuration
composer scripts/train/train.py /opt/ml/code/finetuning_config.yaml

# Convert the Composer checkpoint to HuggingFace model format
python scripts/inference/convert_composer_to_hf.py \
    --composer_path /tmp/checkpoints/latest-rank0.pt \
    --hf_output_path /opt/ml/model/hf_fine_tuned_model \
    --output_precision bf16

# Print the content of the model artifact directory
ls /opt/ml/model/

  • Define the 😊 HuggingFace Estimator — The Estimator sets the Docker container used to run the training job. We will use an image with PyTorch 2.0.0 and Python 3.10. The bash script and the configuration file are automatically uploaded to S3 and made available inside the container (handled by the SageMaker Python SDK). We set the training instance to g5.48xlarge, which has 8x NVIDIA A10G GPUs. The p4d.24xlarge is also a good choice; although it is more expensive, it comes with 8x NVIDIA A100 GPUs. We also specify the metrics to track on the training and test sets (Cross Entropy and Perplexity). The values of these metrics are captured via regex expressions and sent to Amazon CloudWatch.
# Define the container image for the training job
training_image_uri = f"763104351884.dkr.ecr.{region}.amazonaws.com/huggingface-pytorch-training:2.0.0-transformers4.28.1-gpu-py310-cu118-ubuntu20.04-v1.1"

# Define metrics to send to CloudWatch
# Note: the regexes must match the Cross Entropy and Perplexity lines printed by
# Composer in the job logs; adjust them if the log format differs.
metrics = [
    # On training set
    {"Name": "train:LanguageCrossEntropy",
     "Regex": "Train metrics\\/train\\/LanguageCrossEntropy: ([+-]?((\\d+\\.?\\d*)|(\\.\\d+)))"},
    {"Name": "train:LanguagePerplexity",
     "Regex": "Train metrics\\/train\\/LanguagePerplexity: ([+-]?((\\d+\\.?\\d*)|(\\.\\d+)))"},
    # On test set
    {"Name": "test:LanguageCrossEntropy",
     "Regex": "Eval metrics\\/eval\\/LanguageCrossEntropy: ([+-]?((\\d+\\.?\\d*)|(\\.\\d+)))"},
    {"Name": "test:LanguagePerplexity",
     "Regex": "Eval metrics\\/eval\\/LanguagePerplexity: ([+-]?((\\d+\\.?\\d*)|(\\.\\d+)))"},
]

estimator_args = {
    "image_uri": training_image_uri,     # Training container image
    "entry_point": "launcher.sh",        # Launcher bash script
    "source_dir": ".",                   # Directory with launcher script and configuration file
    "instance_type": "ml.g5.48xlarge",   # Instance type
    "instance_count": 1,                 # Number of training instances
    "base_job_name": "fine-tune-mpt-7b", # Prefix of the training job name
    "role": role,                        # IAM role
    "volume_size": 300,                  # Size of the EBS volume attached to the instance (GB)
    "py_version": "py310",               # Python version
    "metric_definitions": metrics,       # Metrics to track
    "output_path": output_path,          # S3 location where the model artifact will be uploaded
    "code_location": code_location,      # S3 location where the source code will be stored
    "disable_profiler": True,            # Do not create a profiler instance
    "keep_alive_period_in_seconds": 240, # Enable Warm Pools while experimenting
}

huggingface_estimator = HuggingFace(**estimator_args)

⚠️ Make sure to request the respective quotas for SageMaker Training, including the Warm Pools quota in case you are making use of this cool feature. If you plan to run many jobs in SageMaker, take a look at SageMaker Savings Plans.

  • Launch the training job 🚀 — We have everything set to start the training job on Amazon SageMaker:
huggingface_estimator.fit({
    "train": TrainingInput(
        s3_data=training_data_path,
        content_type="application/jsonlines"),
    "test": TrainingInput(
        s3_data=test_data_path,
        content_type="application/jsonlines"),
}, wait=True)

The training time will depend on the size of your dataset. With our dummy dataset, training takes roughly 20 minutes to complete. Once the model has been trained and converted to the 😊 HuggingFace format, SageMaker uploads the model tarball (model.tar.gz) to the S3 output_path. I found that in practice the uploading step takes rather long (>1h), which might be due to the size of the model artifacts to compress (~25GB).
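Once the job has finished, the S3 URI of the uploaded tarball can be read directly from the estimator (standard SageMaker Python SDK behaviour), for example to pass it on to a deployment step:

# S3 location of the compressed model artifact (model.tar.gz)
print(huggingface_estimator.model_data)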

In this article, I showed how you can prepare a dataset and create a training job in SageMaker to fine-tune MPT-7B for your use case. The implementation leverages the training script from LLM Foundry and uses the Composer library's distributed training launcher. Once you have fine-tuned your model and want to deploy it, I recommend checking out the blog posts by Philipp Schmid; there are plenty of examples of how to deploy LLMs in SageMaker. Have fun with your fine-tuned MPT-7B model! 🎉

