Metadata-Version: 2.1
Name: datastew
Version: 0.2.0
Summary: Intelligent data steward toolbox using Large Language Model embeddings for automated Data-Harmonization.
Home-page: https://github.com/SCAI-BIO/index
Author: Tim Adams
Author-email: tim.adams@scai.fraunhofer.de
License: Apache-2.0 license
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: matplotlib~=3.8.1
Requires-Dist: numpy==1.25.2
Requires-Dist: openai~=0.28.0
Requires-Dist: openpyxl
Requires-Dist: pandas==2.1.0
Requires-Dist: pip==21.3.1
Requires-Dist: plotly~=5.17.0
Requires-Dist: python-dateutil==2.8.2
Requires-Dist: python-dotenv~=1.0.0
Requires-Dist: pytz==2023.3
Requires-Dist: seaborn~=0.13.0
Requires-Dist: sentence-transformers==2.3.1
Requires-Dist: setuptools==60.2.0
Requires-Dist: scikit-learn==1.3.2
Requires-Dist: six==1.16.0
Requires-Dist: thefuzz~=0.20.0
Requires-Dist: tzdata==2023.3
Requires-Dist: wheel==0.37.1
Requires-Dist: aiofiles~=0.7.0
Requires-Dist: python-multipart
Requires-Dist: SQLAlchemy~=2.0.27
Requires-Dist: scipy~=1.11.4
Requires-Dist: pydantic~=1.10.14


# datastew

![tests](https://github.com/SCAI-BIO/datastew/actions/workflows/tests.yml/badge.svg) ![GitHub Release](https://img.shields.io/github/v/release/SCAI-BIO/datastew)

Datastew is a Python library for intelligent data harmonization using Large Language Model (LLM) vector embeddings.

## Installation

```bash
pip install datastew
```

## Usage

### Harmonizing Excel/CSV resources

You can import common data models, terminology sources, or data dictionaries for harmonization directly from a
CSV, TSV, or Excel file. An example of how to match two separate variable descriptions is shown in
[datastew/scripts/mapping_excel_example.py](datastew/scripts/mapping_excel_example.py):

```python
from datastew.process.parsing import DataDictionarySource
from datastew.process.mapping import map_dictionary_to_dictionary

# variable_field and description_field refer to the corresponding column names in your Excel sheet
source = DataDictionarySource("source.xlsx", variable_field="var", description_field="desc")
target = DataDictionarySource("target.xlsx", variable_field="var", description_field="desc")

df = map_dictionary_to_dictionary(source, target)
df.to_excel("result.xlsx")
```

The resulting file contains the pairwise variable mapping based on the closest similarity for all possible matches,
as well as a similarity measure per row.
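
Since the mapping function returns a pandas DataFrame, you can also post-filter matches before writing them out, e.g. to keep only high-confidence pairs for manual review. A minimal sketch with hypothetical column names (`source_variable`, `target_variable`, `similarity`; the actual output columns may differ):

```python
import pandas as pd

# Stand-in for the mapping result; in practice this would come from
# map_dictionary_to_dictionary(source, target).
df = pd.DataFrame({
    "source_variable": ["age", "sex"],
    "target_variable": ["patient_age", "gender"],
    "similarity": [0.91, 0.62],
})

# Keep only matches above a chosen similarity threshold.
confident = df[df["similarity"] >= 0.8]
print(confident)
```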

By default this will use the local MPNet model, which may not yield optimal performance. If you have an OpenAI API
key, you can use their embedding API instead. To use your key, create a `GPT4Adapter` and pass it to the
function:

```python
from datastew.embedding import GPT4Adapter

embedding_model = GPT4Adapter(key="your_api_key")
df = map_dictionary_to_dictionary(source, target, embedding_model=embedding_model)
```
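
Rather than hard-coding the key in your script, you could read it from an environment variable and pass that to the adapter. A minimal sketch (the variable name below is illustrative, not a datastew convention):

```python
import os

# Illustrative variable name; pick whatever fits your deployment.
os.environ.setdefault("DATASTEW_OPENAI_KEY", "your_api_key")
key = os.environ["DATASTEW_OPENAI_KEY"]

# key can now be passed as GPT4Adapter(key=key)
print(key)
```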

### Creating and using stored mappings

A simple example of how to initialize an in-memory database and compute a similarity mapping is shown in
[datastew/scripts/mapping_db_example.py](datastew/scripts/mapping_db_example.py):

```python
from datastew.repository.sqllite import SQLLiteRepository
from datastew.repository.model import Terminology, Concept, Mapping
from datastew.embedding import MPNetAdapter

# omit the mode argument to create a persistent database file instead
repository = SQLLiteRepository(mode="memory")
embedding_model = MPNetAdapter()

terminology = Terminology("snomed CT", "SNOMED")

text1 = "Diabetes mellitus (disorder)"
concept1 = Concept(terminology, text1, "Concept ID: 11893007")
mapping1 = Mapping(concept1, text1, embedding_model.get_embedding(text1))

text2 = "Hypertension (disorder)"
concept2 = Concept(terminology, text2, "Concept ID: 73211009")
mapping2 = Mapping(concept2, text2, embedding_model.get_embedding(text2))

repository.store_all([terminology, concept1, mapping1, concept2, mapping2])

text_to_map = "Sugar sickness"
embedding = embedding_model.get_embedding(text_to_map)
mappings, similarities = repository.get_closest_mappings(embedding, limit=2)
for mapping, similarity in zip(mappings, similarities):
    print(f"Similarity: {similarity} -> {mapping}")
```

output:

```plaintext
Similarity: 0.47353370635583486 -> Concept ID: 11893007 : Diabetes mellitus (disorder) | Diabetes mellitus (disorder)
Similarity: 0.20031612264852067 -> Concept ID: 73211009 : Hypertension (disorder) | Hypertension (disorder)
```
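
The scores above are vector similarities between the query embedding and the stored embeddings. Assuming cosine similarity, the usual choice for sentence embeddings (the repository's actual metric may differ), the underlying computation is:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0, 1.0], [1.0, 1.0, 0.0]))  # ≈ 0.5
```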

You can also import data from file sources (CSV, TSV, XLSX) or from a public API such as OLS. An example script to
download and compute embeddings for SNOMED from the EBI OLS can be found in
[datastew/scripts/ols_snomed_retrieval.py](datastew/scripts/ols_snomed_retrieval.py).
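
Such a retrieval essentially pages through the OLS REST API's term listings. A rough sketch of the URL construction involved (the endpoint path and query parameters here are assumptions based on the public OLS API, not taken from the datastew script):

```python
# Hypothetical sketch: build paged request URLs for the EBI OLS terms endpoint.
BASE_URL = "https://www.ebi.ac.uk/ols4/api/ontologies/snomed/terms"

def terms_page_url(page: int, size: int = 500) -> str:
    # OLS paginates term listings via 'page' and 'size' query parameters.
    return f"{BASE_URL}?page={page}&size={size}"

print(terms_page_url(0))
```

Each returned page can then be fed to an embedding model and stored in a repository, as in the database example above.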
