Models

RCNN

class keras_rcnn.models.RCNN(input_shape, categories, anchor_aspect_ratios=None, anchor_base_size=16, anchor_padding=1, anchor_scales=None, anchor_stride=16, backbone=None, dense_units=1024, mask_shape=(28, 28), maximum_proposals=300, minimum_size=16)
A Region-based Convolutional Neural Network (RCNN)
Parameters:
input_shape : A shape tuple (integers), not including the batch dimension.

For example:

input_shape=(224, 224, 3)

specifies that the inputs are batches of $224 × 224$ RGB images.

Likewise:

input_shape=(224, 224)

specifies that the inputs are batches of $224 × 224$ grayscale images.

categories : An array-like with shape:

$$(categories,)$$.

For example:

categories=["circle", "square", "triangle"]

specifies that the detected objects belong to either the “circle,” “square,” or “triangle” category.

anchor_aspect_ratios : An array-like with shape:

$$(aspect_ratios,)$$

used to generate anchors.

For example:

anchor_aspect_ratios=[0.5, 1., 2.]

corresponds to aspect ratios of 1:2, 1:1, and 2:1, respectively.
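
For instance, here is a minimal sketch (illustrative only, not the library's internal anchor code) of how a base size and an aspect ratio are commonly converted into an equal-area anchor width and height:

import numpy as np

# Illustration only: derive equal-area anchor (width, height) pairs from a
# base size and a list of aspect ratios (width / height).
base_size = 16
aspect_ratios = np.array([0.5, 1.0, 2.0])

area = base_size ** 2
heights = np.sqrt(area / aspect_ratios)   # ratio 0.5 gives a tall (1:2) anchor
widths = aspect_ratios * heights          # ratio 2.0 gives a wide (2:1) anchor

for ratio, w, h in zip(aspect_ratios, widths, heights):
    print(f"aspect ratio {ratio}: {w:.0f} x {h:.0f}")
# aspect ratio 0.5: 11 x 23
# aspect ratio 1.0: 16 x 16
# aspect ratio 2.0: 23 x 11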

anchor_base_size : Integer that specifies an anchor’s base area:

$$base_area = base_size^{2}$$.

anchor_scales : An array-like with shape:

$$(scales,)$$

used to generate anchors. A scale corresponds to:

$$area_{scale} = \sqrt{\frac{area_{anchor}}{area_{base}}}$$.
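
In other words, an anchor generated at scale $s$ has an area of $s^{2}$ times the base area. A short worked example of the arithmetic (the scale values below are assumptions chosen only for illustration):

# Worked example of the scale definition above:
# scale = sqrt(area_anchor / area_base)  =>  area_anchor = scale**2 * area_base
base_size = 16
base_area = base_size ** 2                   # 256

for scale in [1.0, 2.0, 4.0]:                # hypothetical anchor_scales
    anchor_area = scale ** 2 * base_area
    side = anchor_area ** 0.5                # side length of a square (1:1) anchor
    print(f"scale {scale}: area {anchor_area:.0f} ({side:.0f} x {side:.0f})")
# scale 1.0: area 256 (16 x 16)
# scale 2.0: area 1024 (32 x 32)
# scale 4.0: area 4096 (64 x 64)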

anchor_stride : A positive integer

backbone :

dense_units : A positive integer that specifies the dimensionality of the fully-connected layers.

These fully-connected layers precede the layers that predict the classification, regression, and segmentation target functions.

Increasing the number of dense units increases the expressiveness of the network, and consequently its ability to learn the target functions, but it also substantially increases the number of learnable parameters and the memory needed by the model.

mask_shape : A shape tuple (integer).

maximum_proposals : A positive integer that specifies the maximum number of object proposals returned by the model.

The model always returns an array-like with shape:

$$(maximum_proposals, 4)$$

regardless of the number of object proposals that survive non-maximum suppression. If non-maximum suppression returns fewer proposals than maximum_proposals, the model pads the remaining entries with bounding boxes with the value:

[0., 0., 0., 0.]

and scores with the value [0.].
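
Illustratively (a sketch of the padding behaviour described above, not the library's internal code), the output is padded to a fixed number of rows so that its shape never depends on how many proposals survive non-maximum suppression:

import numpy as np

# Suppose only two proposals survive non-maximum suppression.
maximum_proposals = 300
kept_boxes = np.array([[10., 20., 50., 60.],
                       [15., 25., 55., 65.]])
kept_scores = np.array([0.9, 0.7])

# Pad to the fixed shape (maximum_proposals, 4) with all-zero boxes and scores.
padding = maximum_proposals - len(kept_boxes)
boxes = np.concatenate([kept_boxes, np.zeros((padding, 4))])
scores = np.concatenate([kept_scores, np.zeros(padding)])

assert boxes.shape == (maximum_proposals, 4)
assert scores.shape == (maximum_proposals,)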

minimum_size : A positive integer that specifies the minimum width or height for each object proposal.
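
Putting the parameters together, a minimal construction sketch; the anchor_scales values below are assumptions chosen for illustration, while the remaining keywords follow the signature at the top of this page:

import keras_rcnn.models

model = keras_rcnn.models.RCNN(
    input_shape=(224, 224, 3),
    categories=["circle", "square", "triangle"],
    anchor_aspect_ratios=[0.5, 1.0, 2.0],
    anchor_base_size=16,
    anchor_scales=[4, 8, 16],     # assumed values, for illustration only
    anchor_stride=16,
    dense_units=1024,
    maximum_proposals=300,
    minimum_size=16,
)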

Attributes:

built
input : Retrieves the input tensor(s) of a layer.
input_mask : Retrieves the input mask tensor(s) of a layer.
input_shape : Retrieves the input shape tuple(s) of a layer.
input_spec : Gets the model’s input specs.
layers
losses : Retrieves the model’s losses.
non_trainable_weights
output : Retrieves the output tensor(s) of a layer.
output_mask : Retrieves the output mask tensor(s) of a layer.
output_shape : Retrieves the output shape tuple(s) of a layer.
state_updates : Returns the updates from all layers that are stateful.
stateful
trainable_weights
updates : Retrieves the model’s updates.
uses_learning_phase
weights

Methods

__call__(self, inputs, **kwargs) Wrapper around self.call(), for handling internal references.
add_loss(self, losses[, inputs]) Adds losses to the layer.
add_update(self, updates[, inputs]) Adds updates to the layer.
add_weight(self, name, shape[, dtype, …]) Adds a weight variable to the layer.
assert_input_compatibility(self, inputs) Checks compatibility between the layer and provided inputs.
build(self, input_shape) Creates the layer weights.
call(self, inputs[, mask]) Calls the model on new inputs.
compile(self, optimizer, **kwargs) Configures the model for training.
compute_mask(self, inputs, mask) Computes an output mask tensor.
compute_output_shape(self, input_shape) Computes the output shape of the layer.
count_params(self) Counts the total number of scalars composing the weights.
evaluate(self[, x, y, batch_size, verbose, …]) Returns the loss value & metrics values for the model in test mode.
evaluate_generator(self, generator[, steps, …]) Evaluates the model on a data generator.
fit(self[, x, y, batch_size, epochs, …]) Trains the model for a given number of epochs (iterations on a dataset).
fit_generator(self, generator[, …]) Trains the model on data generated batch-by-batch by a Python generator (or an instance of Sequence).
from_config(config[, custom_objects]) Instantiates a Model from its config (output of get_config()).
get_config(self) Returns the config of the layer.
get_input_at(self, node_index) Retrieves the input tensor(s) of a layer at a given node.
get_input_mask_at(self, node_index) Retrieves the input mask tensor(s) of a layer at a given node.
get_input_shape_at(self, node_index) Retrieves the input shape(s) of a layer at a given node.
get_layer(self[, name, index]) Retrieves a layer based on either its name (unique) or index.
get_output_at(self, node_index) Retrieves the output tensor(s) of a layer at a given node.
get_output_mask_at(self, node_index) Retrieves the output mask tensor(s) of a layer at a given node.
get_output_shape_at(self, node_index) Retrieves the output shape(s) of a layer at a given node.
get_weights(self) Retrieves the weights of the model.
load_weights(self, filepath[, by_name, …]) Loads all layer weights from an HDF5 save file.
predict(self, x[, batch_size, verbose, steps]) Generates output predictions for the input samples.
predict_generator(self, generator[, steps, …]) Generates predictions for the input samples from a data generator.
predict_on_batch(self, x) Returns predictions for a single batch of samples.
run_internal_graph(self, inputs[, masks]) Computes output tensors for new inputs.
save(self, filepath[, overwrite, …]) Saves the model to a single HDF5 file.
save_weights(self, filepath[, overwrite]) Dumps all layer weights to an HDF5 file.
set_weights(self, weights) Sets the weights of the model.
summary(self[, line_length, positions, print_fn]) Prints a string summary of the network.
test_on_batch(self, x, y[, sample_weight]) Tests the model on a single batch of samples.
to_json(self, **kwargs) Returns a JSON string containing the network configuration.
to_yaml(self, **kwargs) Returns a YAML string containing the network configuration.
train_on_batch(self, x, y[, sample_weight, …]) Runs a single gradient update on a single batch of data.
get_losses_for  
get_updates_for  
reset_states
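
A hedged end-to-end sketch of how these methods are typically combined. The compile(self, optimizer, **kwargs) signature above takes only an optimizer, which suggests the model defines its own detection losses; the training generator and its batch format are hypothetical and depend on the keras_rcnn version in use:

import keras

# Configure training; only an optimizer is passed, per the compile signature
# listed above.
model.compile(optimizer=keras.optimizers.Adam(0.0001))

# `training_generator` is a hypothetical generator (or keras.utils.Sequence)
# that yields batches in the format the model expects.
# model.fit_generator(training_generator, epochs=10)

# Inference returns padded bounding boxes and scores, as described for the
# maximum_proposals parameter.
# predictions = model.predict(images, batch_size=1)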