Metadata-Version: 2.1
Name: webtech
Version: 1.2
Summary: Identify technologies used on websites
Home-page: https://github.com/ShielderSec/webtech
Author: thezero, polict
Author-email: info@shielder.it
License: GPLv3
Platform: UNKNOWN
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=2.7.0
Description-Content-Type: text/markdown
Requires-Dist: requests


# webtech
Identify technologies used on websites

## CLI Installation

Simply run the following command in your terminal:

```
python setup.py install --user
```

It's important to install webtech in a folder where the user has write permissions, since it downloads the signature database into that folder.


## Burp Integration

Download the Jython 2.7.0 standalone JAR and install it into Burp.

In "Extender" > "Options" > "Python Environment":
- Select the Jython jar location

Finally, in "Extender" > "Extension":
- Click "Add"
- Select "py" or "Python" as extension format
- Select the `Burp-WebTech.py` file in this folder


## Usage

Scan a website:

```
$ webtech -u https://example.com/

Target URL: https://example.com
...

$ webtech -u file://response.txt

Target URL:
...
```

Full usage:

```
$ webtech -h

Usage: webtech [options]

Options:
  -h, --help            show this help message and exit
  -u URLS, --urls=URLS  url(s) to scan
  --ul=URLS_FILE, --urls-file=URLS_FILE
                        url(s) list file to scan
  --rf=REQUEST_FILES, --request-files=REQUEST_FILES
                        HTTP request file to replay
  --ua=USER_AGENT, --user-agent=USER_AGENT
                        use this user agent
  --rua, --random-user-agent
                        use a random user agent
  --db=DB_FILE, --database-file=DB_FILE
                        custom database file
  --oj, --json          output json-encoded report
  --og, --grep          output grepable report

```
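When driving webtech from another script, one way is to assemble a command line from the flags listed above and run it as a subprocess. A hypothetical helper (the `build_webtech_cmd` name and its keyword arguments are illustrative assumptions, not part of webtech; only the flags come from the help output):

```python
# Hypothetical helper: build a webtech CLI invocation from keyword
# options. Flag names mirror the help output above; the function
# itself is an illustration, not part of webtech.
def build_webtech_cmd(url=None, urls_file=None, user_agent=None,
                      random_ua=False, json_output=False, grep_output=False):
    cmd = ["webtech"]
    if url:
        cmd += ["-u", url]
    if urls_file:
        cmd += ["--urls-file", urls_file]
    if user_agent:
        cmd += ["--user-agent", user_agent]
    if random_ua:
        cmd.append("--random-user-agent")
    if json_output:
        cmd.append("--json")
    if grep_output:
        cmd.append("--grep")
    return cmd
```

The resulting list can be passed to `subprocess.run(...)`; for example, `build_webtech_cmd(url="https://example.com/", json_output=True)` yields `["webtech", "-u", "https://example.com/", "--json"]`.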

## Resources for database matching

HTTP Headers information - http://netinfo.link/http/headers.html  
Cookie names - https://webcookies.org/top-cookie-names  

## TODO

- review all the code TODOs
- write a decent README.md  :D

