The main classes defined in this module are ImageDataLoaders and SegmentationDataLoaders, so you probably want to jump to their definitions. They provide factory methods that are a great way to quickly get your data ready for training; see the vision tutorial for examples.
This is used by the type-dispatched versions of show_batch and show_results for the vision application. By default, there will be int(math.sqrt(n)) rows and ceil(n/rows) columns. double will double the number of columns and n. The default figsize is (cols*imsize, rows*imsize+add_vert). If a title is passed, it is set on the figure. sharex, sharey, squeeze, subplot_kw and gridspec_kw are all passed down to plt.subplots. If return_fig is True, returns fig,axs; otherwise just axs.
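As an illustration (not the library code itself), here is a minimal sketch of that default layout rule:

import math

def grid_shape(n, double=False):
    "Illustrative only: the default rows/cols rule described above"
    rows = int(math.sqrt(n))
    cols = math.ceil(n / rows)
    if double: cols *= 2  # `double` doubles the number of columns (and n)
    return rows, cols

grid_shape(9)               # (3, 3)
grid_shape(9, double=True)  # (3, 6)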
# Boxes use the [-1,1] coordinate scale; clip to that area and drop boxes that end up empty
bb = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
bb,lbl = clip_remove_empty(bb, tensor([1,2,3,2]))
# The first box is clipped; the third is empty after clipping, so it and its label are removed
test_eq(bb, tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(lbl, tensor([1,2,2]))
# bb_pad collates detection samples by padding boxes and labels to the largest count in the batch
img1,img2 = TensorImage(torch.randn(16,16,3)),TensorImage(torch.randn(16,16,3))
bb1 = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
lbl1 = tensor([1, 2, 3, 2])
bb2 = tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]])
lbl2 = tensor([2, 2])
samples = [(img1, bb1, lbl1), (img2, bb2, lbl2)]
res = bb_pad(samples)
non_empty = tensor([True,True,False,True])  # which of bb1's boxes survive clipping
test_eq(res[0][0], img1)
# bb1's first box is clipped; its third box is empty after clipping and removed
test_eq(res[0][1], tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(res[0][2], tensor([1,2,2]))
test_eq(res[1][0], img2)
# The second sample is padded with zero boxes and label 0 (the background class) to reach three boxes
test_eq(res[1][1], tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5], [0,0,0,0]]))
test_eq(res[1][2], tensor([2,2,0]))
TransformBlocks for vision
These are the blocks the vision application provides for the data block API.
If add_na is True, a new category is added for NaN (which will represent the background class).
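For instance, here is a hedged sketch of a detection-style DataBlock built from these blocks; get_bboxes and get_lbls are hypothetical functions returning the boxes and labels for a given image file:

dblock = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock(add_na=True)),
                   get_items=get_image_files,
                   get_y=[get_bboxes, get_lbls],  # hypothetical getters, one per target block
                   n_inp=1)                       # the image is the only input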
This class should not be used directly; one of the factory methods should be preferred instead. All those factory methods accept as arguments:
- item_tfms: one or several transforms applied to the items before batching them
- batch_tfms: one or several transforms applied to the batches once they are formed
- bs: the batch size
- val_bs: the batch size for the validation DataLoader (defaults to bs)
- shuffle_train: whether or not to shuffle the training DataLoader
- device: the PyTorch device to use (defaults to default_device())
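For example, a minimal sketch passing some of these arguments to a factory method, assuming path points at a dataset organized in train/valid folders (Resize and aug_transforms are standard fastai transforms):

dls = ImageDataLoaders.from_folder(path,
                                   item_tfms=Resize(224),        # applied to each item before batching
                                   batch_tfms=aug_transforms(),  # applied to whole batches once formed
                                   bs=64, val_bs=128)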
If valid_pct is provided, a random split is performed (with an optional seed) by setting aside that percentage of the data for the validation set (instead of looking at the grandparent folder). If a vocab is passed, only the folders with names in vocab are kept.
Here is an example loading a subsample of MNIST:
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
Passing valid_pct will ignore the valid/train folders and do a new random split:
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2)
dls.valid_ds.items[:3]
The validation set is a random subset of valid_pct, optionally created with seed for reproducibility.
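For instance, to make that random split reproducible:

dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42)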
Here is how to create the same DataLoaders on the MNIST dataset as the previous example with a label_func:
fnames = get_image_files(path)
def label_func(x): return x.parent.name
dls = ImageDataLoaders.from_path_func(path, fnames, label_func)
Here is another example on the pets dataset, where the filenames are all in an "images" folder and have the form class_name_123.jpg. One way to properly label them is thus to throw away everything after the last _:
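A hedged sketch of that labeling, reusing from_path_func from above (the same approach appears again in the from_lists example below); distinct variable names are used so the MNIST example that follows is unaffected:

pets_path = untar_data(URLs.PETS)
pets_files = get_image_files(pets_path/"images")
def pets_label(x): return '_'.join(x.name.split('_')[:-1])  # 'great_pyrenees_123.jpg' -> 'great_pyrenees'
dls = ImageDataLoaders.from_path_func(pets_path, pets_files, pets_label)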
The validation set is a random subset of valid_pct, optionally created with seed for reproducibility.
Here is how to create the same DataLoaders on the MNIST dataset as the earlier example, this time with a regular expression (on Windows you will need to change the two initial / to \):
pat = r'/([^/]*)/\d+.png$'
dls = ImageDataLoaders.from_path_re(path, fnames, pat)
The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. This method does the same as ImageDataLoaders.from_path_func except label_func is applied to the name of each filename, not the full path.
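For instance, a hedged sketch on the pets dataset (where cat breeds have capitalized filenames), reusing the pets files from the sketch above:

def is_cat(fname): return fname[0].isupper()  # receives the file name, not the full path
dls = ImageDataLoaders.from_name_func(pets_path, pets_files, is_cat)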
The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. This method does the same as ImageDataLoaders.from_path_re except pat is applied to the name of each filename, not the full path.
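For instance, a hedged sketch of the pets labeling with a pattern applied to the name only:

pat = r'^(.*)_\d+\.jpg$'  # 'great_pyrenees_123.jpg' -> 'great_pyrenees'
dls = ImageDataLoaders.from_name_re(pets_path, pets_files, pat)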
The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. Alternatively, if your df contains a valid_col, give its name or its index to that argument (the column should have True for the elements going to the validation set).
You can add an additional folder to the filenames in df if they should not be concatenated directly to path. If they do not contain the proper extensions, you can add suff. If your label column contains multiple labels on each row, you can use label_delim to tell the library you have a multi-label problem.
y_block should be passed when the task automatically picked by the library is wrong; you should then give CategoryBlock, MultiCategoryBlock or RegressionBlock. For more advanced uses, you should use the data block API.
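For instance, a hedged sketch forcing a multi-label interpretation when the automatic choice would be single-label (assuming df and path are set up as in the examples below):

dls = ImageDataLoaders.from_df(df, path, y_block=MultiCategoryBlock)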
The tiny MNIST example from before also comes with a version in a dataframe:
path = untar_data(URLs.MNIST_TINY)
df = pd.read_csv(path/'labels.csv')
df.head()
Here is how to load it using ImageDataLoaders.from_df:
dls = ImageDataLoaders.from_df(df, path)
Here is another example with a multi-label problem:
path = untar_data(URLs.PASCAL_2007)
df = pd.read_csv(path/'train.csv')
df.head()
dls = ImageDataLoaders.from_df(df, path, folder='train', valid_col='is_valid', label_delim=' ')
Note that you can also pass 2 to valid_col (the index, starting with 0).
Same as ImageDataLoaders.from_df after loading the file with header and delimiter.
Here is how to load the same dataset as before with this method:
dls = ImageDataLoaders.from_csv(path, 'train.csv', folder='train', valid_col='is_valid', label_delim=' ')
The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. y_block can be passed to specify the type of the targets.
path = untar_data(URLs.PETS)
fnames = get_image_files(path/"images")
labels = ['_'.join(x.name.split('_')[:-1]) for x in fnames]
dls = ImageDataLoaders.from_lists(path, fnames, labels)
The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. codes contains the mapping from index to label.
path = untar_data(URLs.CAMVID_TINY)
fnames = get_image_files(path/'images')
def label_func(x): return path/'labels'/f'{x.stem}_P{x.suffix}'
codes = np.loadtxt(path/'codes.txt', dtype=str)
dls = SegmentationDataLoaders.from_label_func(path, fnames, label_func, codes=codes)
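As with any DataLoaders, you can then inspect a batch, for instance:

dls.show_batch(max_n=4)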