The Ultimate Deep Learning Project Structure: A Software Engineer’s Guide into the Land of AI

Mind blown 🤯 by the power of AI art and language models! ChatGPT 🤖 assisted in creating this post #FutureIsNow

Engineering solutions one line at a time 🔧 #TrustMeImAnEngineer

Functional programming? 🛠️ It’s like oiling the gears of your code. Everything runs smoother, faster, and with fewer hiccups. Trust me, taking a functional programming course is time well spent.💡

Coding mastery is like wielding a wizard’s magic wand! 🧙‍♂️💻 With the right spells (keyboard strokes) and incantations (code snippets), you can cast a powerful spell on your code and watch it come to life 🔮👨‍💻 So, sharpen your coding skills, become a ninja wizard, and unleash the magic within! 🌟💻

Say no to the dark side of closed windows and embrace the power of the open-source force! 💻👨‍💻🐧 Search, learn, fix, upgrade, and repeat — the life of a coding Jedi. 🔍 & 💻 Join the ranks of the linux ninjas 🥷🏼 and leave the troubles of proprietary software behind 🕵️‍♂️💻 The terminal is your ally, so embrace it, master the command line, and may the source be with you! 🌟⚡️ #OpenSource #LinuxLife #LifetimeLearning

This is what I did back in 2014 when I started coding.

Meet CoPilot, your trusty coding companion 🤖 This innovative tool has been revolutionizing the AI world since its launch, and I’ve been lucky enough to use it since day one. With the ability to generate neural networks, CoPilot is truly a coding powerhouse 💪💻 Watch in awe as it effortlessly outputs the date in Python with ease 🔥 #CodingEfficiency #ProductivityBoost 🚀

May the code be with you, young Padawan 🚀 Time to harness the power of the Force and conquer that bug with AI! 💻💪

AI coding may test your skills, but remember young Padawan: try, fail and learn you must! ChatGPT, CodeGPT, CoPilot, GitHub, Documentation, DuckDuckGo, and Google, your allies they will be 🤖 #AIJedi

Don’t let spaghetti code bring you down, go modular and watch your code shine 🍝💡🌟 #CleanCodeMatters

from .resnet import ResNet, BasicBlock, Bottleneck

/
|-- models/
| |-- __init__.py
|-- experiments/
| |-- __init__.py
|-- trainers/
| |-- __init__.py
|-- data/
| |-- __init__.py
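With this layout, each concern lives in its own package. If the import above sits in models/__init__.py (an assumption consistent with the tree), any sibling package can pull in the model through a relative import, just like the experiment code later in this post:

# hypothetical consumer module, e.g. experiments/some_experiment.py
from ..models import ResNet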

pip install yerbamate
cd path/to/folder
mate init deepnet

├── deepnet
│   ├── data
│   │   └── __init__.py
│   ├── experiments
│   │   └── __init__.py
│   ├── models
│   │   └── __init__.py
│   ├── trainers
│   │   └── __init__.py
│   └── __init__.py
└── mate.json

When you’re so deep into deep learning, you forget to come up for air 🏊‍♂️🤓 #DeepLearningDive #NeverStopLearning

# "Big Transfer (BiT): General Visual Representation Learning"
# Forked from https://github.com/google-research/big_transfer
# Refactored repo: https://github.com/oalee/big_transfer

# You can use complete URLs or the short version.
# Here we use the short version of the following:
# https://github.com/oalee/big_transfer/tree/master/big_transfer/experiments/bit

# Installs the experiment, code and python dependencies with pip
mate install oalee/big_transfer/experiments/bit -y pip

# Installs python dependencies with conda
# Overwrites code dependencies if it already exists
mate install oalee/big_transfer/experiments/bit -yo conda

# Only installs the code without any pip/conda dependencies
mate install oalee/big_transfer/experiments/bit -n

Brewing up some code with Maté 🧉 #ModularMagic 🧩 #CodeCaffeine 💻🤖

.
├── mate.json
└── deepnet
    ├── data
    │   ├── bit
    │   │   ├── fewshot.py
    │   │   ├── __init__.py
    │   │   ├── minibatch_fewshot.py
    │   │   ├── requirements.txt
    │   │   └── transforms.py
    │   └── __init__.py
    ├── experiments
    │   ├── bit
    │   │   ├── aug.py
    │   │   ├── dependencies.json
    │   │   ├── __init__.py
    │   │   ├── learn.py
    │   │   └── requirements.txt
    │   └── __init__.py
    ├── __init__.py
    ├── models
    │   ├── bit_torch
    │   │   ├── downloader
    │   │   │   ├── downloader.py
    │   │   │   ├── __init__.py
    │   │   │   ├── requirements.txt
    │   │   │   └── utils.py
    │   │   ├── __init__.py
    │   │   ├── models.py
    │   │   └── requirements.txt
    │   └── __init__.py
    └── trainers
        ├── bit_torch
        │   ├── __init__.py
        │   ├── lbtoolbox.py
        │   ├── logger.py
        │   ├── lr_schduler.py
        │   ├── requirements.txt
        │   └── trainer.py
        └── __init__.py

On my journey into software engineering for AI 🤖, I’ve looked at JSON, TOML, and many other formats people use. But I’ve found that Python 🐍 is the best format for defining experiments 💡 An experiment definition is just a regular Python file without any loops 💻 Defining hyperparameters in Python has the advantage of flexibility 💪 since you can easily swap modules and classes 🔧 Let the experimentation begin! 🚀

from ...trainers.bit_torch.trainer import test, train
from ...models.bit_torch.models import load_trained_model, get_model_list
from ...data.bit import get_transforms, mini_batch_fewshot
import torchvision as tv, yerbamate, os
from torch.utils.tensorboard import SummaryWriter


# BigTransfer Medium ResNet50 Width 1
model_name = "BiT-M-R50x1"
# Choose a model from get_model_list() that fits into your memory
# Try "BiT-S-R50x1" if this one doesn't work for you

# set up environment variables
env = yerbamate.Environment()

# set up data and augmentation
train_transform, val_transform = get_transforms(img_size=[32, 32])
data_set = tv.datasets.CIFAR10(
    env["datadir"], train=True, download=True, transform=train_transform
)
val_set = tv.datasets.CIFAR10(env["datadir"], train=False, transform=val_transform)
train_set, val_set, train_loader, val_loader = mini_batch_fewshot(
    train_set=data_set,
    valid_set=val_set,
    examples_per_class=None,  # Fewshot disabled
    batch=128,
    batch_split=2,
    workers=os.cpu_count(),  # Defaults to CPU count
)

# load pretrained model
imagenet_weight_path = os.path.join(env["weights_path"], f"{model_name}.npz")
model = load_trained_model(
    weight_path=imagenet_weight_path, model_name=model_name, num_classes=10
)

# set up logger
logger = SummaryWriter(log_dir=env["results"], comment=env.name)

if env.train:
    train(
        model=model,
        train_loader=train_loader,
        valid_loader=val_loader,
        train_set_size=len(train_set),
        save=True,
        save_path=os.path.join(env["results"], f"trained_{model_name}.pt"),
        batch_split=2,
        base_lr=0.003,
        eval_every=100,
        log_path=os.path.join(env["results"], "log.txt"),
        tensorboardlogger=logger,
    )

if env.test:
    test(
        model=model,
        val_loader=val_loader,
        save_path=os.path.join(env["results"], f"trained_{model_name}.pt"),
        log_path=os.path.join(env["results"], "log.txt"),
        tensorboardlogger=logger,
    )

A powerful tool, Python is, for a young Padawan’s AI experimentation 🐍 Begun, the AI revolution has, with its flexibility to test and explore 🚀 Mighty is its power to create models and adjust hyperparameters 💫 A spell it is; use it wisely you must, for within it the Force of the code lies 🔥🧙‍♀️ Go forth and discover; unleash the potential of AI, young Padawan, you shall 🤖 May the code be with you! 💻 Magic you shall witness, as your network trains and learns 🔮🧠
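The experiment above reads all of its paths through env[...]; those keys live in a small local JSON configuration like the one below (results, datadir, and weights_path match the lookups in the code, and logdir is where the TensorBoard logs end up).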

{
  "results": "./results",
  "datadir": "./data",
  "logdir": "./results/logs",
  "weights_path": "./weights"
}

When you finally set the environment variables correctly and your training session takes off like a rocket 🚀 You just gotta say, it’s all about the ‘path’ to success! 🔥

# train the model
mate train bit learn
# Alternatively you can directly run with python
python -m deepnet.experiments.bit.learn train
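Both commands run the same module: the trailing train argument is the action the script checks through env.train, so passing test instead should take the env.test branch and only evaluate the saved checkpoint.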

Learning never sleeps, especially when you’re a machine! 🤖💻📚📈🧠

May all your plots look like this, and may your machine learning model never get stuck in a local minimum and always reach the global optimum! 🤖📈 Transfer learning reaches 93% accuracy after 1 minute and 97% after 30 minutes 🛸 Much faster than training from scratch 🤸🏻‍♀ See your results with `tensorboard --logdir ./results/logs`

Saying goodbye to spaghetti code, hello to Maté-licious modularity 🍝🧉💻 #NoMoreSpaghettiCode #MakeCodeNotWar #ModularityMatters


# installs a fine-tuning resnet experiment
mate install https://github.com/oalee/deep-vision/tree/main/deepnet/experiments/resnet

# Short install version of this repo: https://github.com/oalee/deep-vision
# installs a customizable pytorch resnet model implementation
mate install oalee/deep-vision/deepnet/models/resnet

# installs cifar10 data loader for pytorch lightning
mate install oalee/deep-vision/deepnet/data/cifar10

# installs the augmentation module separated out from torch image models
mate install oalee/deep-vision/deepnet/data/torch_aug

# installs over 30 Vision Transformer implementations into models
mate install oalee/deep-vision/deepnet/models/torch_vit

# Or install the ViT implementations directly from lucidrains as a non-independent module
mate install https://github.com/lucidrains/vit-pytorch/tree/main/vit_pytorch

# installs a pytorch lightning classifier module
mate install oalee/deep-vision/deepnet/trainers/pl_classification

# installs pytorch lightning gan training module from lightweight-gan repo
mate install oalee/lightweight-gan/lgan/trainers/lgan

# Installs the source code of torch image models
mate install https://github.com/rwightman/pytorch-image-models/tree/main/timm

# This doesn't install dependencies. If you encounter bugs, you can try:
# pip install timm

# Alternatively, install from a forked version that installs dependencies
mate install https://github.com/oalee/deep-vision/tree/main/timm -yo pip

from timm.models import create_model, DenseNet, ResNet, EfficientNet, MobileNetV3
from timm.data.auto_augment import augment_and_mix_transform, auto_augment_transform
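Once the timm source tree sits next to your project (as in the layout below), its regular factory API is available. A minimal sketch, with an illustrative model name and input shape that are not from the post:

import torch
from timm.models import create_model

# timm's model factory; "resnet50" and num_classes=10 are example values
model = create_model("resnet50", pretrained=False, num_classes=10)
logits = model(torch.randn(1, 3, 224, 224))  # -> tensor of shape [1, 10]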

.
├── deepnet
│   ├── data
│   ├── experiments
│   ├── models
│   └── trainers
├── mate.json
└── timm
    ├── data
    │   ├── auto_augment.py
    │   └── ...
    ├── layers
    │   ├── activations_jit.py
    │   ├── ...
    │   └── weight_init.py
    ├── loss
    │   ├── asymmetric_loss.py
    │   ├── ...
    │   └── jsd.py
    ├── models
    │   ├── beit.py
    │   ├── _builder.py
    │   ├── byoanet.py
    │   ├── byobnet.py
    │   ├── cait.py
    │   ├── coat.py
    │   ├── convit.py
    │   ├── convmixer.py
    │   ├── convnext.py
    │   ├── crossvit.py
    │   ├── cspnet.py
    │   ├── davit.py
    │   ├── deit.py
    │   ├── densenet.py
    │   ├── dla.py
    │   ├── dpn.py
    │   ├── edgenext.py
    │   ├── efficientformer.py
    │   ├── efficientformer_v2.py
    │   ├── ...
    │   ├── hub.py
    │   ├── inception_resnet_v2.py
    │   ├── inception_v3.py
    │   ├── inception_v4.py
    │   ├── resnest.py
    │   ├── resnet.py
    │   ├── resnetv2.py
    │   ├── rexnet.py
    │   ├── selecsls.py
    │   ├── senet.py
    │   ├── sequencer.py
    │   ├── sknet.py
    │   ├── swin_transformer.py
    │   ├── swin_transformer_v2_cr.py
    │   ├── swin_transformer_v2.py
    │   ├── tnt.py
    │   ├── tresnet.py
    │   ├── twins.py
    │   ├── vgg.py
    │   ├── visformer.py
    │   ├── vision_transformer_hybrid.py
    │   ├── vision_transformer.py
    │   ├── vision_transformer_relpos.py
    │   ├── volo.py
    │   ├── vovnet.py
    │   ├── xception_aligned.py
    │   ├── xception.py
    │   └── xcit.py
    ├── optim
    │   ├── adabelief.py
    │   ├── lamb.py
    │   └── ...
    ├── scheduler
    │   ├── cosine_lr.py
    │   ├── scheduler.py
    │   └── ...
    ├── utils
    │   ├── metrics.py
    │   └── ...
    └── version.py

23 directories, 268 files

Every AI/ML model’s success depends on data; it’s the fuel for the model 🔥 Without good data, augmentation, and labeling, you won’t have much success, and your model will be like a car with no gas 🚗 So invest time in collecting, pre-processing, and labeling data, or your model’s predictions may be just a shot in the dark 🎯
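As an illustration (not from the post), a typical torchvision pre-processing and augmentation pipeline looks like this; the crop size and normalization statistics are standard CIFAR-10 values and should be adjusted for your own dataset:

import torchvision.transforms as T

# illustrative train-time augmentation with common CIFAR-10 defaults
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.2470, 0.2435, 0.2616)),
])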

mate export
# This generates requirements.txt (Python dependencies) and dependencies.json (code dependencies) for modules and experiments
Generated requirements.txt for deepnet/data/cifar10
Generated requirements.txt for deepnet/data/cifar100
Generated requirements.txt for deepnet/data/keras
Generated requirements.txt for deepnet/data/preprocessing
Generated dependencies.json for deepnet/experiments/resnet
Generated requirements.txt for deepnet/experiments/resnet
Generated requirements.txt for deepnet/experiments/keras_finetune
Generated dependencies.json for deepnet/experiments/resnet_keras
Generated requirements.txt for deepnet/experiments/resnet_keras
Generated requirements.txt for deepnet/models/vit
Generated requirements.txt for deepnet/models/resnet
Generated requirements.txt for deepnet/models/test
Generated requirements.txt for deepnet/models/keras
Generated requirements.txt for deepnet/trainers/classification
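Each generated requirements.txt is scoped to a single module, so a collaborator who installs only that module can resolve its dependencies on their own, for example:

pip install -r deepnet/models/resnet/requirements.txt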