Function Reference

Keras Models

- Keras Model
- Keras Model composed of a linear stack of layers
- Create a Keras custom model
- Replicates a model on different GPUs.
- Print a summary of a Keras model
- Configure a Keras model for training
- Evaluate a Keras model
- Export a Saved Model
- Train a Keras model
- Fits the model on data yielded batch-by-batch by a generator.
- Evaluates the model on a data generator.
- Generate predictions from a Keras model
- Generates probability or class probability predictions for the input samples.
- Returns predictions for a single batch of samples.
- Generates predictions for the input samples from a data generator.
- Single gradient update or model evaluation over one batch of samples.
- Retrieves a layer based on either its name (unique) or index.
- Remove the last layer in a model
- Save/Load models using HDF5 files
- Serialize a model to an R object
- Clone a model instance.
- Freeze and unfreeze weights
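
Taken together, these functions form the basic workflow: define a model, compile it, then fit, evaluate, and predict. A minimal sketch, assuming the keras R package is installed; `x_train`, `y_train`, and the test objects are placeholders for your own data:

```r
library(keras)

# Define: a linear stack of layers
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")

summary(model)

# Compile: configure the model for training
model %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

# Fit / evaluate / predict (x_train etc. are placeholders)
# history <- model %>% fit(x_train, y_train, epochs = 5, batch_size = 128)
# model %>% evaluate(x_test, y_test)
# preds <- model %>% predict(x_test)
```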

Core Layers

- Input layer
- Add a densely-connected NN layer to an output
- Apply an activation function to an output.
- Applies Dropout to the input.
- Reshapes an output to a certain shape.
- Permute the dimensions of an input according to a given pattern
- Repeats the input n times.
- Wraps arbitrary expression as a layer
- Layer that applies an update to the cost function based on input activity.
- Masks a sequence by using a mask value to skip timesteps.
- Flattens an input
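
A short sketch of how the core layers compose, here using the functional API; the input shape and layer sizes are arbitrary illustrations:

```r
library(keras)

inputs <- layer_input(shape = c(32, 32, 3))

outputs <- inputs %>%
  layer_flatten() %>%                      # flatten the input
  layer_dense(units = 128) %>%             # densely-connected layer
  layer_activation("relu") %>%             # explicit activation layer
  layer_dropout(rate = 0.5) %>%            # dropout regularization
  layer_dense(units = 10, activation = "softmax")

model <- keras_model(inputs = inputs, outputs = outputs)
```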

Convolutional Layers

- 1D convolution layer (e.g. temporal convolution).
- Transposed 1D convolution layer (sometimes called Deconvolution).
- 2D convolution layer (e.g. spatial convolution over images).
- Transposed 2D convolution layer (sometimes called Deconvolution).
- 3D convolution layer (e.g. spatial convolution over volumes).
- Transposed 3D convolution layer (sometimes called Deconvolution).
- Convolutional LSTM.
- Depthwise separable 1D convolution.
- Separable 2D convolution.
- Depthwise separable 2D convolution.
- Upsampling layer for 1D inputs.
- Upsampling layer for 2D inputs.
- Upsampling layer for 3D inputs.
- Zero-padding layer for 1D input (e.g. temporal sequence).
- Zero-padding layer for 2D input (e.g. picture).
- Zero-padding layer for 3D data (spatial or spatio-temporal).
- Cropping layer for 1D input (e.g. temporal sequence).
- Cropping layer for 2D input (e.g. picture).
- Cropping layer for 3D data (e.g. spatial or spatio-temporal).
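
An illustrative convolutional stack for 28x28 grayscale images (it also previews the pooling layers listed in the next section); the filter counts are arbitrary:

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_global_average_pooling_2d() %>%
  layer_dense(units = 10, activation = "softmax")
```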

Pooling Layers

- Max pooling operation for temporal data.
- Max pooling operation for spatial data.
- Max pooling operation for 3D data (spatial or spatio-temporal).
- Average pooling for temporal data.
- Average pooling operation for spatial data.
- Average pooling operation for 3D data (spatial or spatio-temporal).
- Global max pooling operation for temporal data.
- Global average pooling operation for temporal data.
- Global max pooling operation for spatial data.
- Global average pooling operation for spatial data.
- Global max pooling operation for 3D data.
- Global average pooling operation for 3D data.

Activation Layers

- Apply an activation function to an output.
- Rectified Linear Unit activation function
- Leaky version of a Rectified Linear Unit.
- Parametric Rectified Linear Unit.
- Thresholded Rectified Linear Unit.
- Exponential Linear Unit.
- Softmax activation function.

Dropout Layers

- Applies Dropout to the input.
- Spatial 1D version of Dropout.
- Spatial 2D version of Dropout.
- Spatial 3D version of Dropout.

Locally-connected Layers

- Locally-connected layer for 1D inputs.
- Locally-connected layer for 2D inputs.

Recurrent Layers

- Fully-connected RNN where the output is to be fed back to input.
- Gated Recurrent Unit - Cho et al.
- Fast GRU implementation backed by CuDNN.
- Long Short-Term Memory unit - Hochreiter 1997.
- Fast LSTM implementation backed by CuDNN.

Embedding Layers

- Turns positive integers (indexes) into dense vectors of fixed size.
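
A sketch combining the embedding and recurrent layers above into a simple sequence classifier; `vocab_size` and `maxlen` stand in for values produced by your own preprocessing:

```r
library(keras)

vocab_size <- 10000   # assumed vocabulary size
maxlen     <- 100     # assumed (padded) sequence length

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = vocab_size, output_dim = 128,
                  input_length = maxlen) %>%
  layer_lstm(units = 64) %>%
  layer_dense(units = 1, activation = "sigmoid")
```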

Normalization Layers

- Batch normalization layer (Ioffe and Szegedy, 2014).

Noise Layers

- Apply additive zero-centered Gaussian noise.
- Apply multiplicative 1-centered Gaussian noise.
- Applies Alpha Dropout to the input.

Merge Layers

- Layer that adds a list of inputs.
- Layer that subtracts two inputs.
- Layer that multiplies (element-wise) a list of inputs.
- Layer that averages a list of inputs.
- Layer that computes the maximum (element-wise) of a list of inputs.
- Layer that computes the minimum (element-wise) of a list of inputs.
- Layer that concatenates a list of inputs.
- Layer that computes a dot product between samples in two tensors.
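
Merge layers take a list of tensors. A small functional-API sketch with two branches joined by a concatenate layer; the shapes are arbitrary:

```r
library(keras)

input_a <- layer_input(shape = c(16))
input_b <- layer_input(shape = c(16))

branch_a <- input_a %>% layer_dense(units = 8, activation = "relu")
branch_b <- input_b %>% layer_dense(units = 8, activation = "relu")

merged <- layer_concatenate(list(branch_a, branch_b))   # join the two branches
output <- merged %>% layer_dense(units = 1, activation = "sigmoid")

model <- keras_model(inputs = list(input_a, input_b), outputs = output)
```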

Layer Wrappers

- Apply a layer to every temporal slice of an input.
- Bidirectional wrapper for RNNs.
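
A sketch of wrapping an LSTM in the bidirectional wrapper inside a sequential model; the sizes are illustrative:

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 10000, output_dim = 64) %>%
  bidirectional(layer_lstm(units = 32)) %>%   # runs the LSTM in both directions
  layer_dense(units = 1, activation = "sigmoid")
```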

Layer Methods

- Layer/Model configuration
- Layer/Model weights as R arrays
- Retrieve tensors for layers with multiple nodes
- Count the total number of scalars composing the weights.
- Reset the states for a layer
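
A sketch of the layer/model methods in use, built around a throwaway model; the layer name is arbitrary:

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 4, input_shape = c(8), name = "dense_a") %>%
  layer_dense(units = 1)

layer <- get_layer(model, name = "dense_a")   # or by index
cfg   <- get_config(layer)                    # layer configuration
w     <- get_weights(layer)                   # weights as a list of R arrays
set_weights(layer, w)
count_params(model)                           # total number of weight scalars
```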

Custom Layers

- Base R6 class for Keras layers
- Create a Keras Layer

Model Persistence

- Save/Load models using HDF5 files
- Save/Load model weights using HDF5 files
- Serialize a model to an R object
- Layer/Model weights as R arrays
- Layer/Model configuration
- Model configuration as JSON
- Model configuration as YAML
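
A sketch of the persistence helpers using a throwaway model; "my_model.h5" and "my_weights.h5" are placeholder paths:

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 1, input_shape = c(4))

save_model_hdf5(model, "my_model.h5")         # full model (architecture + weights)
model2 <- load_model_hdf5("my_model.h5")

save_model_weights_hdf5(model, "my_weights.h5")  # weights only

json <- model_to_json(model)     # configuration only, as JSON
rds  <- serialize_model(model)   # raw vector suitable for saveRDS()
```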

Datasets

- CIFAR10 small image classification
- CIFAR100 small image classification
- IMDB Movie reviews sentiment classification
- Reuters newswire topics classification
- MNIST database of handwritten digits
- Fashion-MNIST database of fashion articles
- Boston housing price regression dataset
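
Each dataset function downloads (and caches) the data on first use. A sketch that loads MNIST and prepares it for a dense network:

```r
library(keras)

mnist <- dataset_mnist()

# Flatten the 28x28 images and scale to [0, 1]; one-hot encode the labels
x_train <- array_reshape(mnist$train$x, c(nrow(mnist$train$x), 784)) / 255
y_train <- to_categorical(mnist$train$y, num_classes = 10)

str(x_train)   # 60000 x 784 matrix
```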

Applications

- Xception V1 model for Keras.
- Inception V3 model, with weights pre-trained on ImageNet.
- Inception-ResNet v2 model, with weights trained on ImageNet
- VGG16 and VGG19 models for Keras.
- ResNet50 model for Keras.
- MobileNet model architecture.
- MobileNetV2 model architecture
- Instantiates the DenseNet architecture.
- Instantiates a NASNet model.
- Preprocesses a tensor or array encoding a batch of images.
- Decodes the prediction of an ImageNet model.
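
A sketch of classifying a single image with a pre-trained network; "elephant.jpg" is a placeholder path, and the ImageNet weights are downloaded on first use:

```r
library(keras)

model <- application_vgg16(weights = "imagenet")

img <- image_load("elephant.jpg", target_size = c(224, 224))
x <- image_to_array(img)
x <- array_reshape(x, c(1, dim(x)))    # add a batch dimension
x <- imagenet_preprocess_input(x)

preds <- model %>% predict(x)
imagenet_decode_predictions(preds, top = 3)[[1]]
```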

Sequence Preprocessing

- Pads sequences to the same length
- Generates skipgram word pairs.
- Generates a word rank-based probabilistic sampling table.
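
Padding is the most common of these steps; a quick sketch with toy integer sequences:

```r
library(keras)

seqs <- list(c(1, 2, 3), c(4, 5), 6)

pad_sequences(seqs, maxlen = 4)                    # zero pre-padding (default)
pad_sequences(seqs, maxlen = 4, padding = "post")  # zero post-padding
```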

Text Preprocessing

- Text tokenization utility
- Update tokenizer internal vocabulary based on a list of texts or list of sequences.
- Save a text tokenizer to an external file
- Transform each text in texts to a sequence of integers.
- Transforms each text in texts to a sequence of integers.
- Convert a list of texts to a matrix.
- Convert a list of sequences into a matrix.
- One-hot encode a text into a list of word indexes in a vocabulary of size n.
- Converts a text to a sequence of indexes in a fixed-size hashing space.
- Convert text to a sequence of words (or tokens).
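
A sketch of the tokenizer workflow on two toy texts: build a tokenizer, fit its vocabulary, then convert texts to sequences or a document-term matrix:

```r
library(keras)

texts <- c("the cat sat on the mat", "the dog ate my homework")

tokenizer <- text_tokenizer(num_words = 1000)
tokenizer %>% fit_text_tokenizer(texts)            # learn the vocabulary

sequences <- texts_to_sequences(tokenizer, texts)  # lists of word indexes
x_binary  <- texts_to_matrix(tokenizer, texts, mode = "binary")
```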

Image Preprocessing

- Loads an image into PIL format.
- 3D array representation of images
- Generate batches of image data with real-time data augmentation. The data will be looped over (in batches).
- Fit image data generator internal statistics to some sample data.
- Generates batches of augmented/normalized data from image data and labels
- Generates batches of data from images in a directory (with optional augmented/normalized data)
- Retrieve the next item from a generator
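
A sketch of real-time augmentation from a directory of images; "images/train" is a placeholder path organised with one sub-folder per class:

```r
library(keras)

datagen <- image_data_generator(rescale = 1/255, horizontal_flip = TRUE)

train_gen <- flow_images_from_directory(
  "images/train",
  generator   = datagen,
  target_size = c(150, 150),
  batch_size  = 32,
  class_mode  = "categorical"
)

batch <- generator_next(train_gen)   # one batch of images and labels
# model %>% fit_generator(train_gen, steps_per_epoch = 100, epochs = 10)
```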

Optimizers

- Stochastic gradient descent optimizer
- RMSProp optimizer
- Adagrad optimizer.
- Adadelta optimizer.
- Adam optimizer
- Adamax optimizer
- Nesterov Adam optimizer
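
Each optimizer function returns an object you pass to compile(); a sketch with a throwaway model (argument names such as lr follow the Keras version this reference describes):

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 1, input_shape = c(4))

model %>% compile(
  optimizer = optimizer_sgd(lr = 0.01, momentum = 0.9, nesterov = TRUE),
  loss = "mse"
)

# The other optimizers follow the same pattern, e.g.
opt <- optimizer_adam(lr = 0.001)
```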

Callbacks

- Callback that prints metrics to stdout.
- Save the model after every epoch.
- Stop training when a monitored quantity has stopped improving.
- Callback used to stream events to a server.
- Learning rate scheduler.
- TensorBoard basic visualizations
- Reduce learning rate when a metric has stopped improving.
- Callback that terminates training when a NaN loss is encountered.
- Callback that streams epoch results to a CSV file
- Create a custom callback
- Base R6 class for Keras callbacks
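
Callbacks are passed as a list to fit(). A sketch combining early stopping, checkpointing, and learning-rate reduction; "best.h5" is a placeholder path and x_train / y_train stand in for your data:

```r
library(keras)

callbacks <- list(
  callback_early_stopping(monitor = "val_loss", patience = 3),
  callback_model_checkpoint("best.h5", save_best_only = TRUE),
  callback_reduce_lr_on_plateau(factor = 0.5, patience = 2)
)

# model %>% fit(x_train, y_train, validation_split = 0.2,
#               epochs = 30, callbacks = callbacks)
```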

Initializers

- Initializer that generates tensors initialized to 0.
- Initializer that generates tensors initialized to 1.
- Initializer that generates tensors initialized to a constant value.
- Initializer that generates tensors with a normal distribution.
- Initializer that generates tensors with a uniform distribution.
- Initializer that generates a truncated normal distribution.
- Initializer capable of adapting its scale to the shape of weights.
- Initializer that generates a random orthogonal matrix.
- Initializer that generates the identity matrix.
- Glorot normal initializer, also called Xavier normal initializer.
- Glorot uniform initializer, also called Xavier uniform initializer.
- He normal initializer.
- He uniform variance scaling initializer.
- LeCun uniform initializer.
- LeCun normal initializer.
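
Initializers are supplied to layers through their kernel_initializer and bias_initializer arguments; a minimal sketch:

```r
library(keras)

layer <- layer_dense(
  units = 64,
  kernel_initializer = initializer_he_normal(),
  bias_initializer   = initializer_zeros()
)
```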

Constraints

- Weight constraints
- Base R6 class for Keras constraints

Utils

- Plot training history
- Utility function for generating batches of temporal data.
- Converts a class vector (integers) to binary class matrix.
- Normalize a matrix or nd-array
- Provide a scope with mappings of names to custom objects
- Keras array object
- Representation of HDF5 dataset to be used instead of an R array
- Downloads a file from a URL if it is not already in the cache.
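
A quick sketch of the most common utilities, using toy data:

```r
library(keras)

labels <- c(0, 1, 2, 1, 0)
y <- to_categorical(labels, num_classes = 3)   # binary class matrix

x <- normalize(matrix(rnorm(20), nrow = 4))    # rows scaled to unit L2 norm

# After fitting a model, plot(history) draws the recorded training history.
```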

- Objects exported from other packages
- Install Keras and the TensorFlow backend
- Check if Keras is Available
- Keras backend tensor engine
- Keras implementation
- Select a Keras implementation and backend

Losses

- Model loss functions

Metrics

- Model performance metrics

Regularizers

- L1 and L2 regularization

Activations

- Activation functions
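
A sketch that ties these four groups together: a layer with a built-in activation and an L2 weight penalty, compiled with a loss and a metric. The string names shown can generally be swapped for the corresponding loss_* and metric_* function objects:

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 32, input_shape = c(20), activation = "relu",
              kernel_regularizer = regularizer_l2(l = 0.01)) %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  loss = "binary_crossentropy",   # or loss_binary_crossentropy
  optimizer = "adam",
  metrics = c("accuracy")         # or e.g. metric_binary_accuracy
)
```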

Backend

- Element-wise absolute value.
- Bitwise reduction (logical AND).
- Bitwise reduction (logical OR).
- Creates a 1D tensor containing a sequence of integers.
- Returns the index of the maximum value along an axis.
- Returns the index of the minimum value along an axis.
- Active Keras backend
- Batchwise dot product.
- Turn an nD tensor into a 2D tensor with the same first dimension.
- Returns the value of more than one tensor variable.
- Applies batch normalization on x given mean, var, beta and gamma.
- Sets the values of many tensor variables at once.
- Adds a bias vector to a tensor.
- Binary crossentropy between an output tensor and a target tensor.
- Casts a tensor to a different dtype and returns it.
- Cast an array to the default Keras float type.
- Categorical crossentropy between an output tensor and a target tensor.
- Destroys the current TF graph and creates a new one.
- Element-wise value clipping.
- Concatenates a list of tensors alongside the specified axis.
- Creates a constant tensor.
- 1D convolution.
- 2D convolution.
- 2D deconvolution (i.e. transposed convolution).
- 3D convolution.
- 3D deconvolution (i.e. transposed convolution).
- Computes cos of x element-wise.
- Returns the static number of elements in a Keras variable or tensor.
- Runs CTC loss algorithm on each batch element.
- Decodes the output of a softmax.
- Converts CTC labels from dense to sparse.
- Cumulative product of the values in a tensor, alongside the specified axis.
- Cumulative sum of the values in a tensor, alongside the specified axis.
- Depthwise 2D convolution with separable filters.
- Multiplies 2 tensors (and/or variables) and returns a tensor.
- Sets entries in x to zero at random, while scaling the entire tensor.
- Returns the dtype of a Keras tensor or variable, as a string.
- Exponential linear unit.
- Fuzz factor used in numeric expressions.
- Element-wise equality between two tensors.
- Evaluates the value of a variable.
- Element-wise exponential.
- Adds a 1-sized dimension at index axis.
- Instantiates an identity matrix and returns it.
- Flatten a tensor.
- Default float type
- Reduce elems using fn to combine them from left to right.
- Reduce elems using fn to combine them from right to left.
- Instantiates a Keras function
- Retrieves the elements at the given indices in a tensor.
- TF session to be used by the backend.
- Get the uid for the default graph.
- Returns the value of a variable.
- Returns the shape of a variable.
- Returns the gradients of loss w.r.t. variables.
- Element-wise truth value of (x > y).
- Element-wise truth value of (x >= y).
- Segment-wise linear approximation of sigmoid.
- Returns a tensor with the same content as the input tensor.
- Default image data format convention ('channels_first' or 'channels_last').
- Selects x in test phase, and alt otherwise.
- Returns whether the targets are in the top k predictions.
- Selects x in train phase, and alt otherwise.
- Returns the shape of tensor or variable as a list of int or NULL entries.
- Returns whether x is a Keras tensor.
- Returns whether x is a placeholder.
- Returns whether a tensor is a sparse tensor.
- Returns whether x is a symbolic tensor.
- Normalizes a tensor wrt the L2 norm alongside the specified axis.
- Returns the learning phase flag.
- Element-wise truth value of (x < y).
- Element-wise truth value of (x <= y).
- Apply 1D conv with un-shared weights.
- Apply 2D conv with un-shared weights.
- Element-wise log.
- Computes log(sum(exp(elements across dimensions of a tensor))).
- Sets the manual variable initialization flag.
- Map the function fn over the elements elems and return the outputs.
- Maximum value in a tensor.
- Element-wise maximum of two tensors.
- Mean of a tensor, alongside the specified axis.
- Minimum value in a tensor.
- Element-wise minimum of two tensors.
- Compute the moving average of a variable.
- Returns the number of axes in a tensor, as an integer.
- Computes mean and std for batch then apply batch_normalization on batch.
- Element-wise inequality between two tensors.
- Computes the one-hot representation of an integer tensor.
- Instantiates an all-ones tensor variable and returns it.
- Instantiates an all-ones variable of the same shape as another tensor.
- Permutes axes in a tensor.
- Instantiates a placeholder tensor and returns it.
- 2D Pooling.
- 3D Pooling.
- Element-wise exponentiation.
- Prints message and the tensor value when evaluated.
- Multiplies the values in a tensor, alongside the specified axis.
- Returns a tensor with random binomial distribution of values.
- Returns a tensor with normal distribution of values.
- Instantiates a variable with values drawn from a normal distribution.
- Returns a tensor with uniform distribution of values.
- Instantiates a variable with values drawn from a uniform distribution.
- Rectified linear unit.
- Repeats a 2D tensor.
- Repeats the elements of a tensor along an axis.
- Reset graph identifiers.
- Reshapes a tensor to the specified shape.
- Resizes the images contained in a 4D tensor.
- Resizes the volume contained in a 5D tensor.
- Reverse a tensor along the specified axes.
- Iterates over the time dimension of a tensor
- Element-wise rounding to the closest integer.
- 2D convolution with separable filters.
- Sets the learning phase to a fixed value.
- Sets the value of a variable, from an R array.
- Returns the symbolic shape of a tensor or variable.
- Element-wise sigmoid.
- Element-wise sign.
- Computes sin of x element-wise.
- Softmax of a tensor.
- Softplus of a tensor.
- Softsign of a tensor.
- Categorical crossentropy with integer targets.
- Pads the 2nd and 3rd dimensions of a 4D tensor.
- Pads 5D tensor with zeros along the depth, height, width dimensions.
- Element-wise square root.
- Element-wise square.
- Removes a 1-dimension from the tensor at index axis.
- Stacks a list of rank R tensors into a rank R+1 tensor.
- Standard deviation of a tensor, alongside the specified axis.
- Returns variables but with zero gradient with respect to every other variable.
- Sum of the values in a tensor, alongside the specified axis.
- Switches between two operations depending on a scalar value.
- Element-wise tanh.
- Pads the middle dimension of a 3D tensor.
- Creates a tensor by tiling x by n.
- Converts a sparse tensor into a dense tensor and returns it.
- Transposes a tensor and returns it.
- Returns a tensor with truncated random normal distribution of values.
- Update the value of x to new_x.
- Update the value of x by adding increment.
- Update the value of x by subtracting decrement.
- Variance of a tensor, alongside the specified axis.
- Instantiates a variable and returns it.
- Instantiates an all-zeros variable and returns it.
- Instantiates an all-zeros variable of the same shape as another tensor.
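
A sketch of the low-level backend interface: the k_* functions operate on backend tensors, and k_eval() brings results back as R arrays. The values here are random placeholders, and axis arguments in the R wrappers are 1-based:

```r
library(keras)

x <- k_constant(matrix(runif(6), nrow = 2, ncol = 3))
w <- k_constant(matrix(runif(12), nrow = 3, ncol = 4))

y <- k_dot(x, w)            # matrix product, shape (2, 4)
k_eval(y)                   # back to an R array

k_eval(k_sum(x, axis = 2))  # sum over the second (1-based) axis
```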