Metadata-Version: 2.1
Name: fmbench
Version: 1.0.28
Summary: Benchmark performance of **any model** on **any supported instance type** on Amazon SageMaker.
Home-page: https://github.com/aws-samples/foundation-model-benchmarking-tool
License: MIT
Keywords: benchmarking,sagemaker,generative-ai,foundation-models
Author: Amit Arora
Author-email: aroraai@amazon.com
Requires-Python: >=3.11,<4.0
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Dist: boto3 (>=1.34.32,<2.0.0)
Requires-Dist: datasets (==2.16.1)
Requires-Dist: ipywidgets (==8.1.1)
Requires-Dist: jupyter (>=1.0.0,<2.0.0)
Requires-Dist: pandas (==2.1.4)
Requires-Dist: papermill (>=2.5.0,<3.0.0)
Requires-Dist: pyyaml
Requires-Dist: requests (>=2.31.0,<3.0.0)
Requires-Dist: sagemaker (==2.203.0)
Requires-Dist: seaborn (==0.13.1)
Requires-Dist: tomark (==0.1.4)
Requires-Dist: transformers (==4.36.2)
Project-URL: Repository, https://github.com/aws-samples/foundation-model-benchmarking-tool
Description-Content-Type: text/markdown

# Foundation Model benchmarking tool (FMBench) built using Amazon SageMaker

![Foundation Model Benchmarking Tool](https://github.com/aws-samples/foundation-model-benchmarking-tool/blob/main/img/fmbt-small.png?raw=true)

A key challenge with foundation models (FMs) is benchmarking their performance in terms of inference latency, throughput, and cost, so as to determine which model, running on which combination of hardware and serving stack, provides the best price-performance for a given workload.

Stated as a **business problem**, the ask is: “*What is the dollar cost per transaction for a given generative AI workload that serves a given number of users while keeping the response time under a target threshold?*”

To really answer this question, we need to answer an **engineering question** (an optimization problem, actually) corresponding to the business problem: “*What is the minimum number of instances N, of the most cost-optimal instance type T, needed to serve a workload W while keeping the average transaction latency under L seconds?*”

*W := {R transactions per minute, average prompt token length P, average generation token length G}*

This foundation model benchmarking tool (a.k.a. `FMBench`) answers the engineering question above and, in turn, the original business question of how to get the best price-performance for a given workload. Below is one of the plots generated by `FMBench` to help answer it (_the y-axis numbers for transactions per minute and latency have been removed from the image; you can find them in the actual plot generated when you run `FMBench`_).

![business question](https://github.com/aws-samples/foundation-model-benchmarking-tool/blob/main/img/business_summary.png?raw=true)

## New in this release

1. Support for Hugging Face datasets as well as bring-your-own datasets, more [here](https://github.com/aws-samples/foundation-model-benchmarking-tool?tab=readme-ov-file#bring-your-own-dataset--endpoint).

1. Support for external endpoints. No longer limited to Amazon SageMaker endpoints, more [here](https://github.com/aws-samples/foundation-model-benchmarking-tool?tab=readme-ov-file#bring-your-own-dataset--endpoint).

1. Bring your own `Amazon SageMaker` endpoints. If you have an already deployed SageMaker endpoint you can now test it with `FMBench`.

1. Added config files for [`Mistral-7B-Instruct`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), [`Mistral-7B-Instruct-v0.2-AWQ`](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-AWQ), `huggingface-tc-distilbert-base-uncased` (from SageMaker JumpStart), `meta-textgenerationneuron-llama-2-70b-f` (on AWS Inferentia2).

## Key Features

1. Benchmark any model on any serving stack as long as it can be deployed on Amazon SageMaker.

1. Bring your own script for model deployment if the model is not natively available via Amazon SageMaker JumpStart. 

1. Bring your own tokenizer for your model, configure any inference container parameters you need.

1. Auto-generated reports comparing and contrasting different serving options.

## Installation

1. Launch the AWS CloudFormation template included in this repository using one of the buttons from the table below. The template creates the following resources in your AWS account: Amazon S3 buckets, an AWS IAM role, and an Amazon SageMaker Notebook with this repository cloned. A read bucket holds all the files (configuration files, datasets) required to run `FMBench`, and a write bucket holds the metrics and reports that `FMBench` generates. The stack takes about 5 minutes to create.

   |AWS Region                |     Link        |
   |:------------------------:|:-----------:|
   |us-east-1 (N. Virginia)    | [<img src="https://github.com/aws-samples/foundation-model-benchmarking-tool/blob/main/img/ML-FMBT-cloudformation-launch-stack.png?raw=true">](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=fmbench&templateURL=https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/ML-FMBT/template.yml) |

1. Once the CloudFormation stack is created, navigate to SageMaker Notebooks and open the `fmbench-notebook`.

1. On the `fmbench-notebook` open a Terminal and run the following commands.

    ```{.bash}
    conda create --name fmbench_python311 -y python=3.11 ipykernel
    source activate fmbench_python311
    pip install -U fmbench
    ```

## Steps to run

1. Now you are ready to run `fmbench` with the following command line. We will use a sample config file placed in the S3 bucket by the CloudFormation stack for a quick first run.
    
    1. We benchmark performance of the `Llama2-7b` model on `ml.g5.xlarge` and `ml.g5.2xlarge` instance types, using the `huggingface-pytorch-tgi-inference` container. This test takes about 30 minutes to complete and costs about $0.20.
    
    1. It uses a simple heuristic of 750 words ≈ 1,000 tokens; for a more accurate token count, use the `Llama2 tokenizer` (instructions are provided in the next section). ***For more accurate token-throughput results, it is strongly recommended that you use a tokenizer specific to the model you are testing rather than the default; see the instructions provided later in this document on how to use a custom tokenizer***.

        ```{.bash}
        account=`aws sts get-caller-identity | jq .Account | tr -d '"'`
        fmbench --config-file s3://sagemaker-fmbench-read-${account}/configs/config-llama2-7b-g5-quick.yml
        ```
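
        The word-to-token heuristic mentioned above can be sketched with plain shell arithmetic (the 750:1000 ratio is only the default approximation used here, not a property of any particular tokenizer):

        ```{.bash}
        # Convert an approximate word count to an approximate token count
        # using the 750 words ~= 1000 tokens heuristic (tokens ~= words * 4/3).
        words=1500
        tokens=$(( words * 1000 / 750 ))
        echo "${words} words ~= ${tokens} tokens"   # prints: 1500 words ~= 2000 tokens
        ```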

1. The generated reports and metrics are available in the `sagemaker-fmbench-write-<replace_w_your_aws_account_id>` bucket. The metrics and report files are also downloaded locally into the `results` directory (created by `FMBench`); the benchmarking report is available there as a markdown file called `report.md`. You can view the rendered Markdown report in the SageMaker notebook itself or download the metrics and report files to your machine for offline analysis.
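
    If you prefer to pull the results down with the AWS CLI instead of the notebook, a minimal sketch (the write-bucket name pattern comes from the CloudFormation stack described above; the local directory name is an arbitrary choice):

    ```{.bash}
    # Resolve the account id, then sync the write bucket (which holds the
    # metrics and reports) to a local directory for offline analysis.
    account=`aws sts get-caller-identity | jq .Account | tr -d '"'`
    aws s3 sync s3://sagemaker-fmbench-write-${account}/ ./fmbench-results/
    ```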

## License

[MIT-0](https://github.com/aws-samples/foundation-model-benchmarking-tool/blob/main/LICENSE)

## Documentation

The official documentation is available in the [GitHub repo](https://github.com/aws-samples/foundation-model-benchmarking-tool).

