SAHI with TorchVision for Sliced Inference
0. Preparation
- Install the latest versions of SAHI and torchvision:
In [ ]:
!pip install -U git+https://github.com/obss/sahi
!pip install torch torchvision
In [ ]:
# check the current working directory
import os
os.getcwd()
- Import required modules:
In [7]:
# import required functions, classes
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction, predict, get_prediction
from sahi.utils.file import download_from_url
from sahi.utils.cv import read_image
from IPython.display import Image
In [8]:
# set torchvision FasterRCNN model
import torchvision
from torchvision.models.detection import FasterRCNN_ResNet50_FPN_Weights
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)

# download test images into demo_data folder
download_from_url('https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/small-vehicles1.jpeg', 'demo_data/small-vehicles1.jpeg')
download_from_url('https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/terrain2.png', 'demo_data/terrain2.png')
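If you want to check which COCO categories these pretrained weights predict, the category names ship with the torchvision weights metadata; a minimal sketch using the weights enum imported above:
In [ ]:
# inspect the COCO category names bundled with the pretrained weights;
# index 3 is 'car', which matches the category ids in the outputs below
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
print(weights.meta["categories"][:5])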
1. Standard Inference with a Torchvision Model
- Instantiate a SAHI detection model by wrapping the torchvision model instance and setting the confidence threshold, input image size, and device:
In [9]:
detection_model = AutoDetectionModel.from_pretrained(
    model_type='torchvision',
    model=model,
    confidence_threshold=0.5,
    image_size=640,
    device="cpu",  # or "cuda:0"
    load_at_init=True,
)
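If you are unsure whether a GPU will be available at runtime, the device string can be chosen dynamically instead of hardcoded; a minimal sketch:
In [ ]:
import torch

# pick a GPU when one is available, otherwise fall back to CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"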
- Perform prediction by feeding the get_prediction function an image path and a DetectionModel instance:
In [10]:
result = get_prediction("demo_data/small-vehicles1.jpeg", detection_model)
- Or perform prediction by feeding the get_prediction function a numpy image and a DetectionModel instance:
In [5]:
result = get_prediction(read_image("demo_data/small-vehicles1.jpeg"), detection_model)
- Visualize predicted bounding boxes and masks over the original image:
In [11]:
result.export_visuals(export_dir="demo_data/")
Image("demo_data/prediction_visual.png")
Out[11]: (prediction visualization image)
2. Sliced Inference with a TorchVision Model
- To perform sliced prediction we need to specify slice parameters. In this example we will perform prediction over 320x320 slices with an overlap ratio of 0.2:
In [7]:
result = get_sliced_prediction(
    "demo_data/small-vehicles1.jpeg",
    detection_model,
    slice_height=320,
    slice_width=320,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
Performing prediction on 12 number of slices.
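The reported slice count follows from the image size and the slice parameters: each slice origin advances by slice_size * (1 - overlap_ratio) pixels. A minimal sketch of that computation, assuming the demo image is roughly 1024x580 pixels (an assumption; SAHI's internal slicing logic may differ in edge handling):
In [ ]:
import math

def count_slices(image_dim: int, slice_dim: int, overlap_ratio: float) -> int:
    """Approximate number of slices needed to cover one image dimension."""
    if image_dim <= slice_dim:
        return 1
    step = int(slice_dim * (1 - overlap_ratio))  # stride between slice origins
    return math.ceil((image_dim - slice_dim) / step) + 1

cols = count_slices(1024, 320, 0.2)  # 4 slices across the width
rows = count_slices(580, 320, 0.2)   # 3 slices down the height
print(cols * rows)                   # 12, matching the log line above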
- Visualize predicted bounding boxes and masks over the original image:
In [8]:
result.export_visuals(export_dir="demo_data/")
Image("demo_data/prediction_visual.png")
Out[8]: (prediction visualization image)
3. Prediction Result
- Predictions are returned as a sahi.prediction.PredictionResult; you can access the object prediction list as follows:
In [9]:
object_prediction_list = result.object_prediction_list
In [10]:
object_prediction_list[0]
Out[10]:
ObjectPrediction< bbox: BoundingBox: <(319, 317, 383, 365), w: 64, h: 48>, mask: None, score: PredictionScore: <value: 0.9990589022636414>, category: Category: <id: 3, name: car>>
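Individual fields can be read directly off an ObjectPrediction; a minimal sketch, assuming the bbox, score, and category attributes shown in the repr above:
In [ ]:
pred = object_prediction_list[0]

# corner coordinates of the box in pixels (minx, miny, maxx, maxy)
print(pred.bbox.minx, pred.bbox.miny, pred.bbox.maxx, pred.bbox.maxy)
print(pred.score.value)                      # confidence score
print(pred.category.id, pred.category.name)  # predicted COCO category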
In [11]:
result.to_coco_annotations()[:3]
Out[11]:
[{'image_id': None, 'bbox': [319, 317, 64, 48], 'score': 0.9990589022636414, 'category_id': 3, 'category_name': 'car', 'segmentation': [], 'iscrowd': 0, 'area': 3072},
 {'image_id': None, 'bbox': [448, 305, 47, 39], 'score': 0.9988724589347839, 'category_id': 3, 'category_name': 'car', 'segmentation': [], 'iscrowd': 0, 'area': 1833},
 {'image_id': None, 'bbox': [762, 252, 32, 32], 'score': 0.996906578540802, 'category_id': 3, 'category_name': 'car', 'segmentation': [], 'iscrowd': 0, 'area': 1024}]
In [12]:
result.to_coco_predictions(image_id=1)[:3]
Out[12]:
[{'image_id': 1, 'bbox': [319, 317, 64, 48], 'score': 0.9990589022636414, 'category_id': 3, 'category_name': 'car', 'segmentation': [], 'iscrowd': 0, 'area': 3072},
 {'image_id': 1, 'bbox': [448, 305, 47, 39], 'score': 0.9988724589347839, 'category_id': 3, 'category_name': 'car', 'segmentation': [], 'iscrowd': 0, 'area': 1833},
 {'image_id': 1, 'bbox': [762, 252, 32, 32], 'score': 0.996906578540802, 'category_id': 3, 'category_name': 'car', 'segmentation': [], 'iscrowd': 0, 'area': 1024}]
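To persist these predictions, for example for COCO-style evaluation, the list can be written to a JSON file; a minimal sketch (the output filename is an arbitrary choice):
In [ ]:
import json

# dump predictions in COCO result format for downstream evaluation
with open("demo_data/coco_predictions.json", "w") as f:
    json.dump(result.to_coco_predictions(image_id=1), f, indent=2)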
- ObjectPrediction objects can be converted to the imantics annotation format:
In [ ]:
!pip install -U imantics
In [13]:
result.to_imantics_annotations()[:3]
Out[13]:
[<imantics.annotation.Annotation at 0x7f81f7545e50>, <imantics.annotation.Annotation at 0x7f81ef156b50>, <imantics.annotation.Annotation at 0x7f81ef1614c0>]
4. Batch Prediction
- Set model and directory parameters:
In [15]:
# reinstantiate the model (pretrained=True is deprecated; use the weights enum instead)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)

detection_model = AutoDetectionModel.from_pretrained(
    model_type='torchvision',
    model=model,
    confidence_threshold=0.4,
    image_size=640,
    device="cpu",  # or "cuda:0"
    load_at_init=True,
)

slice_height = 256
slice_width = 256
overlap_height_ratio = 0.2
overlap_width_ratio = 0.2
source_image_dir = "demo_data/"
- Perform sliced inference on the given folder:
In [16]:
predict(
    detection_model=detection_model,
    source=source_image_dir,
    slice_height=slice_height,
    slice_width=slice_width,
    overlap_height_ratio=overlap_height_ratio,
    overlap_width_ratio=overlap_width_ratio,
)
There are 3 listed files in folder: demo_data/
Performing inference on images:   0%|          | 0/3 [00:00<?, ?it/s]
Performing prediction on 20 number of slices.
Prediction time is: 1932.06 ms
Performing inference on images:  33%|███▎      | 1/3 [00:01<00:03, 1.98s/it]
Performing prediction on 15 number of slices.
Prediction time is: 1055.54 ms
Performing inference on images:  67%|██████▋   | 2/3 [00:03<00:01, 1.47s/it]
Performing prediction on 15 number of slices.
Prediction time is: 1078.43 ms
Performing inference on images: 100%|██████████| 3/3 [00:04<00:00, 1.41s/it]
Prediction results are successfully exported to runs/predict/exp13
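The exported visuals can then be displayed straight from the run directory; a minimal sketch (the exp13 folder name comes from the run above and will differ between runs):
In [ ]:
from pathlib import Path
from IPython.display import Image, display

# show every exported visualization produced by the batch prediction run
for image_path in sorted(Path("runs/predict/exp13").iterdir()):
    if image_path.suffix in {".png", ".jpg", ".jpeg"}:
        display(Image(str(image_path)))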