Metadata-Version: 2.1
Name: timeexecution
Version: 4.0.0a2
Summary: Python project
Home-page: https://github.com/kpn-digital/py-timeexecution
Author: KPN DE Platform
Author-email: de-platform@kpn.com
License: UNKNOWN
Description: Time Execution
        ==============
        
        .. image:: https://secure.travis-ci.org/kpn-digital/py-timeexecution.svg?branch=master
            :target:  http://travis-ci.org/kpn-digital/py-timeexecution?branch=master
        
        .. image:: https://img.shields.io/codecov/c/github/kpn-digital/py-timeexecution/master.svg
            :target: http://codecov.io/github/kpn-digital/py-timeexecution?branch=master
        
        .. image:: https://img.shields.io/pypi/v/timeexecution.svg
            :target: https://pypi.python.org/pypi/timeexecution
        
        .. image:: https://readthedocs.org/projects/py-timeexecution/badge/?version=latest
            :target: http://py-timeexecution.readthedocs.org/en/latest/?badge=latest
        
        
        This package records application metrics in one or more backends.
        With the help of grafana_ you can easily create dashboards from them.
        
        
        Features
        --------
        
        - Sending data to multiple backends
        - Custom backends
        - Hooks
        
        Backends
        --------
        
        - InfluxDB 0.8
        - Elasticsearch 5
        - Kafka
        
        
        Installation
        ------------
        
        If you want to use it with the `ElasticsearchBackend`:
        
        .. code-block:: bash
        
            $ pip install timeexecution[elasticsearch]
        
        with the `InfluxBackend`:
        
        .. code-block:: bash
        
            $ pip install timeexecution[influxdb]
        
        with the `KafkaBackend`:
        
        .. code-block:: bash
        
            $ pip install timeexecution[kafka]
        
        or, if you prefer to have all backends available so you can easily switch between them:
        
        .. code-block:: bash
        
            $ pip install timeexecution[all]
        
        
        Usage
        -----
        
        To use this package, decorate the functions whose execution you want to time.
        Every wrapped function will create a metric consisting of 3 default values:
        
        - `name` - The name of the series the metric will be stored in
        - `value` - The time it took in ms for the wrapped function to complete
        - `hostname` - The hostname of the machine the code is running on
        
        See the following example:
        
        .. code-block:: python
        
            from time_execution import settings, time_execution
            from time_execution.backends.influxdb import InfluxBackend
            from time_execution.backends.elasticsearch import ElasticsearchBackend
        
            # Setup the desired backend
            influx = InfluxBackend(host='influx', database='metrics', use_udp=False)
            elasticsearch = ElasticsearchBackend('elasticsearch', index='metrics')
        
            # Configure the time_execution decorator
            settings.configure(backends=[influx, elasticsearch])
        
            # Wrap the methods where you want the metrics
            @time_execution
            def hello():
                return 'World'
        
            # Now when we call hello() we will get metrics in our backends
            hello()
        
        This will result in the following entry in InfluxDB:
        
        .. code-block:: json
        
            [
                {
                    "name": "__main__.hello",
                    "columns": [
                        "time",
                        "sequence_number",
                        "value",
                        "hostname",
                    ],
                    "points": [
                        [
                            1449739813939,
                            1111950001,
                            312,
                            "machine.name",
                        ]
                    ]
                }
            ]
        
        And the following entry in Elasticsearch:
        
        .. code-block:: json
        
            [
                {
                    "_index": "metrics-2016.01.28",
                    "_type": "metric",
                    "_id": "AVKIp9DpnPWamvqEzFB3",
                    "_score": null,
                    "_source": {
                        "timestamp": "2016-01-28T14:34:05.416968",
                        "hostname": "dfaa4928109f",
                        "name": "__main__.hello",
                        "value": 312
                    },
                    "sort": [
                        1453991645416
                    ]
                }
            ]
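
        Conceptually, the decorator measures the wall-clock duration of the call in
        milliseconds and hands the resulting metric to every configured backend. A
        minimal sketch of that idea (not the package's actual implementation;
        `sketch_time_execution` is an illustrative name):

        .. code-block:: python

            import socket
            import time
            from functools import wraps

            def sketch_time_execution(backends):
                # Illustrative decorator factory: time the call and write a metric
                # with the same name/value/hostname fields shown above.
                def decorator(func):
                    @wraps(func)
                    def wrapper(*args, **kwargs):
                        start = time.time()
                        try:
                            return func(*args, **kwargs)
                        finally:
                            metric = dict(
                                name='{}.{}'.format(func.__module__, func.__name__),
                                value=int((time.time() - start) * 1000),
                                hostname=socket.gethostname(),
                            )
                            for backend in backends:
                                backend.write(**metric)
                    return wrapper
                return decorator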
        
        It's also possible to run a backend in a separate thread, which queues the
        metrics and sends them in bulk.
        
        For example:
        
        .. code-block:: python
        
            from time_execution import settings, time_execution
            from time_execution.backends.elasticsearch import ElasticsearchBackend
            from time_execution.backends.threaded import ThreadedBackend
        
            # Set up a threaded backend which will run on a separate thread
            threaded_backend = ThreadedBackend(
                backend=ElasticsearchBackend,
                backend_kwargs={
                    "host": "elasticsearch",
                    "index": "metrics",
                }
            )
        
            # A backend can also be configured by import path, like:
            threaded_backend = ThreadedBackend(
                backend="time_execution.backends.kafka.KafkaBackend",
                # any other configuration belongs to the backend
                backend_kwargs={
                    "hosts": "kafka",
                    "topic": "metrics"
                }
            )
        
            # Configure the time_execution decorator
            settings.configure(backends=[threaded_backend])
        
            # Wrap the methods where you want the metrics
            @time_execution
            def hello():
                return 'World'
        
            # Now when we call hello() the metric is put on a queue and sent later,
            # either after a configurable interval or once the queue reaches a configurable size.
            hello()
        
        It's also possible to decorate coroutines or awaitables in Python >=3.5.
        
        For example:
        
        .. code-block:: python
        
            import asyncio
            from time_execution import time_execution_async
        
            # ... Setup the desired backend(s) as described above ...
        
            # Wrap the methods where you want the metrics
            @time_execution_async
            async def hello():
                await asyncio.sleep(1)
                return 'World'
        
            # Now when we schedule hello() we will get metrics in our backends
            loop = asyncio.get_event_loop()
            loop.run_until_complete(hello())
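
        On Python 3.7 and newer, `asyncio.run` is the idiomatic way to drive the
        coroutine (shown here with a plain coroutine for brevity; a decorated one is
        called the same way):

        .. code-block:: python

            import asyncio

            async def hello():
                await asyncio.sleep(0)
                return 'World'

            # Equivalent to the event-loop boilerplate above on Python 3.7+
            print(asyncio.run(hello()))  # prints World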
        
        
        Hooks
        -----
        
        `time_execution` supports hooks with which you can change the metric before it
        is sent to the backend.
        
        With a hook you can add new fields and change existing ones. This can be
        useful for cases where you would like to add a column to the metric based on
        the response of the wrapped function.
        
        A hook will always get 5 arguments:
        
        - `response` - The returned value of the wrapped function
        - `exception` - The raised exception of the wrapped function
        - `metric` - A dict containing the data to be sent to the backend
        - `func_args` - Original args received by the wrapped function
        - `func_kwargs` - Original kwargs received by the wrapped function
        
        From within a hook you can change the `name` if you want the metrics to be split
        into multiple series.
        
        See the following example of how to set up hooks.
        
        .. code-block:: python
        
            # Now lets create a hook
            def my_hook(response, exception, metric, func_args, func_kwargs):
                status_code = getattr(response, 'status_code', None)
                if status_code:
                    return dict(
                        name='{}.{}'.format(metric['name'], status_code),
                        extra_field='foo bar'
                    )
        
            # Configure the time_execution decorator, but now with hooks
            settings.configure(backends=[backend], hooks=[my_hook])
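
        Hooks also receive the raised exception (or `None`), so a hook can, for
        instance, tag failed calls. A sketch of that idea; `error_hook` and the
        `error` field are illustrative names, not part of the package:

        .. code-block:: python

            def error_hook(response, exception, metric, func_args, func_kwargs):
                # Tag the metric when the wrapped function raised an exception
                if exception is not None:
                    return dict(error=exception.__class__.__name__)

            # Registered like any other hook:
            # settings.configure(backends=[backend], hooks=[error_hook])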
        
        Manually sending metrics
        ------------------------
        
        You can also manually send any metric to the backend. Manually written
        metrics will not include the default values and will not pass through the hooks.
        
        See the following example.
        
        .. code-block:: python
        
            import os

            from time_execution import write_metric

            loadavg = os.getloadavg()
            write_metric('cpu.load.1m', value=loadavg[0])
            write_metric('cpu.load.5m', value=loadavg[1])
            write_metric('cpu.load.15m', value=loadavg[2])
        
        .. _grafana: http://grafana.org/
        
        
        Custom Backend
        --------------
        
        Writing a custom backend is very simple: all you need to do is create a class
        with a `write` method. It is not required to extend `BaseMetricsBackend`,
        but in order to upgrade easily it is recommended that you do.
        
        .. code-block:: python
        
            from time_execution.backends.base import BaseMetricsBackend
        
        
            class MetricsPrinter(BaseMetricsBackend):
                def write(self, name, **data):
                    print(name, data)
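
        A backend does not have to print; for unit tests, for example, a backend can
        simply collect metrics in memory. A sketch following the same `write`
        contract (`CollectingBackend` is an illustrative name):

        .. code-block:: python

            class CollectingBackend(object):
                # Stores every metric in a list instead of sending it anywhere
                def __init__(self):
                    self.metrics = []

                def write(self, name, **data):
                    self.metrics.append(dict(data, name=name))

            backend = CollectingBackend()
            backend.write('__main__.hello', value=312, hostname='machine.name')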
        
        
        Contribute
        ----------
        
        Do you have something to contribute? Great!
        Here are a few things that may come in handy.
        
        Testing in this project is done via docker. There is a docker-compose to easily
        get all the required containers up and running.
        
        There is a Makefile with a few targets that we use often:
        
        - `make test`
        - `make isort`
        - `make lint`
        - `make build`
        - `make setup.py`
        
        All of these make targets can be prefixed by `docker/`. This will execute
        the target inside the docker container instead of on your local machine.
        For example `make docker/build`.
        
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Provides-Extra: all
Provides-Extra: elasticsearch
Provides-Extra: influxdb
Provides-Extra: kafka
