
PyTorch Hub For Researchers

Explore and extend models from the latest cutting-edge research.

Discover and publish models to a pre-trained model repository designed for research exploration. Check out the models for Researchers, or learn How It Works. You can also Contribute Models.

This is a beta release; we will be collecting feedback and improving the PyTorch Hub over the coming months.
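Loading any of the models below takes only a few lines with the torch.hub API. A minimal sketch, assuming torch and torchvision are installed and network access is available (weights are cached locally after the first download; pretrained=True mirrors the hub pages, while newer torchvision releases use a weights argument instead):

    import torch

    # Discover the entrypoints a repository exposes via its hubconf.py
    print(torch.hub.list('pytorch/vision'))

    # Load a pre-trained model by entrypoint name; weights are downloaded
    # to the local hub cache (~/.cache/torch/hub) on first use
    model = torch.hub.load('pytorch/vision', 'resnet50', pretrained=True)
    model.eval()  # switch to inference mode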


YOLOv5

Ultralytics YOLOv5 🚀 for object detection, instance segmentation and image classification.

55.2k
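A minimal usage sketch following the Ultralytics hub instructions; the sample image URL is illustrative, and inference also accepts local paths, PIL images, and numpy arrays:

    import torch

    # Load the small pre-trained YOLOv5 checkpoint from the Ultralytics repo
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    # Run detection; results hold boxes, class labels and confidences
    results = model('https://ultralytics.com/images/zidane.jpg')
    results.print()  # print a summary of the detections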

MobileNet v2

Efficient networks optimized for speed and memory, with residual blocks

17.1k

ResNet

Deep residual networks pre-trained on ImageNet

17.1k

ResNext

Next-generation ResNets, more efficient and accurate

17.1k

ShuffleNet v2

An efficient ConvNet optimized for speed and memory, pre-trained on ImageNet

17.1k

SqueezeNet

AlexNet-level accuracy with 50x fewer parameters.

17.1k

vgg-nets

Award-winning ConvNets from the 2014 ImageNet ILSVRC challenge

17.1k

Wide ResNet

Wide Residual Networks

17.1k

Deeplabv3

DeepLabV3 models with ResNet-50, ResNet-101 and MobileNet-V3 backbones

17.1k

AlexNet

The 2012 ImageNet winner achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner up.

17.1k

Densenet

Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion.

17.1k

FCN

Fully-Convolutional Network model with ResNet-50 and ResNet-101 backbones

17.1k

Inception_v3

Also called GoogLeNet v3, a famous ConvNet from 2015, trained on ImageNet

17.1k

Silero Voice Activity Detector

Pre-trained Voice Activity Detector

6.7k
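A usage sketch following the snakers4/silero-vad hub README; the helper tuple unpacked from utils matches that README, and example.wav is a placeholder for your own 16 kHz mono file:

    import torch

    # The hub entrypoint returns the model plus a tuple of helper utilities
    model, utils = torch.hub.load('snakers4/silero-vad', 'silero_vad')
    (get_speech_timestamps, save_audio, read_audio,
     VADIterator, collect_chunks) = utils

    # Read the audio and get speech segments as sample offsets
    wav = read_audio('example.wav', sampling_rate=16000)
    timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
    print(timestamps)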

Silero Speech-To-Text Models

A set of compact, enterprise-grade pre-trained STT models for multiple languages.

5.5k

Silero Text-To-Speech Models

A set of compact, enterprise-grade pre-trained TTS models for multiple languages.

5.5k

SNNMLP

Brain-inspired Multilayer Perceptron with Spiking Neurons

4.3k

GhostNet

Efficient networks by generating more features from cheap operations

4.3k

Once-for-All

Once-for-all (OFA) decouples training and search, and achieves efficient inference across various edge devices and resource constraints.

1.9k

Open-Unmix

Reference implementation for music source separation

1.4k

SimpleNet

Let's keep it simple: using simple architectures to outperform deeper and more complex architectures

53