Metadata-Version: 2.1
Name: plinear
Version: 0.1.2.3
Summary: parallel neural network layer for binarization or ternarization - quantized layers from the beginning
License: MIT
Author: your_name
Author-email: your_email@example.com
Requires-Python: >=3.10.12,<4.0.0
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Dist: imageio (>=2.34.2,<3.0.0)
Requires-Dist: numpy (>=1.25.2,<2.0.0)
Requires-Dist: torch (>=2.2.2,<3.0.0)
Requires-Dist: torchaudio (>=2.2.2,<3.0.0)
Requires-Dist: torchvision (>=0.17.2,<0.18.0)
Description-Content-Type: text/markdown

# plinear

GitHub repository for the parallel linear layer.

You can install PLinear using pip:

```sh
pip install plinear
```

### Idea inspired by

https://arxiv.org/pdf/2402.17764?trk=public_post_comment-text

### Code inspired by

https://github.com/kyegomez/BitNet/blob/main/bitnet/bitlinear.py

# Ideas and Road Map

## Parallel neural network (PLinear)

#### Layer composition

Ternary layers are binarized by building posNet and negNet and combining them.

Both are created with posNet, which returns 1 if the weight is above 0 and 0 otherwise.

The result is computed as posNet - negNet to mimic ternary weights.

Applying tanh to the weights lets the model fit and learn without normalizing the entire layer.

No additional activation function was used in testing.
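The composition described above can be sketched roughly as follows. This is a hypothetical illustration of the idea, not the package's actual implementation; in particular, a trainable version would need something like a straight-through estimator so gradients flow through the binarization, which this sketch omits.

```python
import torch
import torch.nn as nn

def pos_net(w: torch.Tensor) -> torch.Tensor:
    # 1 where the weight is above 0, 0 otherwise.
    return (w > 0).float()

class PLinearSketch(nn.Module):
    # Hypothetical sketch: ternary forward pass built from posNet - negNet.
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # tanh keeps weights bounded without normalizing the whole layer.
        w = torch.tanh(self.weight)
        # negNet is just posNet applied to -w, so the result is in {-1, 0, +1}.
        ternary = pos_net(w) - pos_net(-w)
        return x @ ternary.t()
```

Since posNet returns 1 only for strictly positive values, `pos_net(w) - pos_net(-w)` gives +1 for positive weights, -1 for negative ones, and 0 at exactly zero.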

#### Suggested usage

```python
import torch
import torch.nn as nn
from plinear import PLinear

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = PLinear(28*28, 128)
        self.fc2 = PLinear(128, 10)

    def forward(self, x):
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = self.fc2(x)
        return x

# Example usage
model = SimpleNN()
print(model)

```

#### Test code for the MNIST example

```sh
pytest -k mnist
```

Results can be found in tests/result_mnist.

Precision, accuracy, and recall are reported per epoch.

A confusion matrix and a full visualization of the weights per epoch are also provided as animations.

## Visualization (documentation not finished)

## Brute Force optimization of 3 x 3 CNN (Only Idea)

Since the layers are parallelized, each 3 x 3 CNN layer can be brute forced over 2^9 \* 2 weight configurations for ternary, which is very cheap compared to previous models.

Even if the model is the same, the layer is still at least 9 times smaller, even after searching through every case.

The model can then be reduced with simple search tasks.
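As a back-of-the-envelope check of that search-space estimate: a 3 x 3 binary kernel has 9 weights, so each of posNet and negNet has 2^9 candidate kernels. The enumeration below is a hypothetical sketch, not project code.

```python
from itertools import product

# Every 3x3 binary kernel: 9 weights, each 0 or 1 -> 2**9 candidates.
kernels = list(product((0, 1), repeat=9))
print(len(kernels))      # 512 candidates per net

# Searching posNet and negNet independently costs 2 * 2**9 evaluations
# per kernel position, which is what makes brute force cheap here.
print(2 * len(kernels))  # 1024
```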

I believe this can be used to vectorize images into vectors of an appropriate size, which can be reused for image generation and more.

I expect that vectorizing concepts and dynamically allocating them across layers will be the final goal of this project.

## Complex layers (Only Idea)

We can add two more parallel layers to express the complex domain.

There will be a real output and a complex output, which can be used in many forms.

If the input to a complex layer is real, the imaginary part defaults to zero; it can, of course, also be passed in from previous layers.

# Developer Note

### 15 July 2024

Verified that plinear works on Colab.

### 16 July 2024

Released versions 0.1.2.2 and 0.1.2.3.

# Changelog

#### 0.1.2.2

Documented README.md.

Preflight testing done for MNIST, covering both the layer and the visualization.

#### 0.1.2.3

Merged the posNet and negNet functions into a single posNet.

The layer now computes posNet - negNet instead of posNet + negNet, since negNet no longer returns negative values.

Weights are now passed through tanh, which gives a much more stable learning curve.

Removed test results from the git repository.

