mlpack 3.4.2
- If value == true, then VecType is some sort of Armadillo vector or subview.
- The AdaBoost class.
- The model to save to disk.
- This class implements AMF (alternating matrix factorization) on the given matrix V.
- This initialization rule initializes matrices W and H to the square root of the average value of V, perturbed with uniform noise.
- This class acts as a wrapper for basic termination policies to be used by SVDCompleteIncrementalLearning.
- This initialization rule for AMF simply fills the W and H matrices with the matrices given to the constructor of this object.
- This class acts as a wrapper for basic termination policies to be used by SVDIncompleteIncrementalLearning.
- This termination policy only terminates when the maximum number of iterations has been reached.
- This initialization rule for AMF simply takes in two initialization rules, initializing W with the first rule and H with the second.
- This class implements a method titled 'Alternating Least Squares', described in the referenced paper.
- The multiplicative distance update rules for matrices W and H.
- This follows a method described in the paper 'Algorithms for Non-negative Matrix Factorization'.
- This class initializes the W matrix of the AMF algorithm by averaging p randomly chosen columns of V.
- This initialization rule for AMF simply fills the W and H matrices with uniform random noise in [0, 1].
- This class implements a simple residue-based termination policy.
- This class implements a residue tolerance termination policy.
- This class implements SVD batch learning with momentum.
- This class computes SVD using complete incremental batch learning, as described in the referenced paper.
- TODO: merge this template-specialized function for sparse matrices using the common row_col_iterator.
- This class computes SVD using incomplete incremental batch learning, as described in the referenced paper.
- This class implements a validation termination policy based on RMSE.
- Implementation of the AdaptiveMaxPooling layer.
- Implementation of the AdaptiveMeanPooling layer.
- Implementation of the Add module class.
- Implementation of the AddMerge module class.
- The alpha-dropout layer is a regularizer that, with probability 'ratio', randomly sets input values to alphaDash.
- Implementation of the Atrous Convolution class.
- Generator of instances of the binary addition task.
- Generator of instances of the binary sequence copy task.
- Generator of instances of the sequence sort task.
- Implementation of the base layer.
- Declaration of the Batch Normalization layer class.
- Multiple independent Bernoulli distributions.
- Definition and implementation of the Bilinear Interpolation layer.
- For more information, see the referenced paper.
- Implementation of a standard bidirectional recurrent neural network container.
- The CELU activation function.
- Implementation of the Concat class.
- Implementation of the Concatenate module class.
- Implementation of the concat performance class.
- Implementation of the constant layer.
- This class is used to initialize a weight matrix with constant values.
- Implementation of the Convolution class.
- The Cosine Embedding Loss function measures whether two inputs are similar or dissimilar using the cosine distance, and is typically used for learning nonlinear embeddings or for semi-supervised learning.
- A concatenated ReLU has two outputs, one ReLU and one negative ReLU, concatenated together.
- The cross-entropy performance function measures the network's performance according to the cross-entropy between the input and target distributions.
- For more information, see the referenced paper.
- The dice loss performance function measures the network's performance according to the dice coefficient between the input and target distributions.
- The DropConnect layer is a regularizer that, with probability 'ratio', randomly sets connection values to zero and scales the remaining elements by a factor of 1 / (1 - ratio).
- The dropout layer is a regularizer that, with probability 'ratio', randomly sets input values to zero and scales the remaining elements by a factor of 1 / (1 - ratio) during training (rather than scaling at test time), so that the expected sum stays the same.
- The earth mover distance function measures the network's performance according to the Kantorovich-Rubinstein duality approximation.
- The ELiSH activation function.
- The Elliot activation function.
- The ELU activation function, defined by f(x) = x for x > 0 and f(x) = alpha * (e^x - 1) otherwise.
- The empty loss does nothing, letting the user calculate the loss outside the model.
- An implementation of a faster version of the LSTM network layer (Fast LSTM).
- Implementation of a standard feed forward network.
- Computes the two-dimensional convolution using the fast Fourier transform (FFT).
- The FlexibleReLU activation function.
- The implementation of the standard GAN module.
- The Gaussian activation function.
- This class is used to initialize a weight matrix with Gaussian-distributed values.
- The GELU activation function.
- The glimpse layer returns a retina-like representation (down-scaled cropped images) of increasing scale around a given location in a given image.
- This class is used to initialize the weight matrix with the Glorot initialization method.
- An implementation of a GRU network layer.
- The Hard Shrink operator.
- The hard sigmoid activation function.
- The Hard Tanh activation function.
- This class is used to initialize a weight matrix with the initialization rule given by He et al.
- Implementation of the Highway layer.
- The Hinge Embedding loss function is often used to compute the loss between y_true and y_pred.
- The Huber loss is a loss function used in robust regression that is less sensitive to outliers in data than the squared error loss.
- The identity function, f(x) = x.
- This is a template class that can provide information about various initialization methods.
- Initialization traits of the Kathirvalavakumar-Subavathi initialization rule.
- Initialization traits of the Nguyen-Widrow initialization rule.
- The Inverse Quadratic activation function.
- Implementation of the Join module class.
- This class is used to initialize the weight matrix with the method proposed by T. Kathirvalavakumar and S. Subavathi.
- The Kullback–Leibler divergence is often used for continuous distributions (direct regression).
- The L1 loss is a loss function that measures the mean absolute error (MAE) between each element in the input x and target y.
- Declaration of the Layer Normalization class.
- This is a template class that can provide information about various layers.
- The LeakyReLU activation function, defined by f(x) = x for x > 0 and f(x) = alpha * x otherwise (alpha is a fixed, non-trainable parameter).
- This class is used to initialize a weight matrix with the LeCun normal initialization rule.
- Implementation of the Linear layer class.
- Implementation of the Linear3D layer class.
- Implementation of the LinearNoBias class.
- The LiSHT activation function.
- The Log-Hyperbolic-Cosine loss function is often used to improve variational autoencoders.
- The logistic function, f(x) = 1 / (1 + e^(-x)).
- Implementation of the log softmax layer.
- The Lookup class stores word embeddings and retrieves them using tokens.
- The L_p regularizer for arbitrary integer p.
- Implementation of the LSTM module class.
- Margin ranking loss measures the loss given inputs and a label vector with values of 1 or -1.
- Implementation of the MaxPooling layer.
- The mean absolute percentage error performance function measures the network's performance according to the mean of the absolute difference between input and target, divided by the target.
- The mean bias error performance function measures the network's performance according to the mean of the errors.
- Implementation of the MeanPooling layer.
- The mean squared error performance function measures the network's performance according to the mean of squared errors.
- The mean squared logarithmic error performance function measures the network's performance according to the mean of squared logarithmic errors.
- Implementation of the MiniBatchDiscrimination layer.
- The Mish activation function, f(x) = x * tanh(softplus(x)).
- Multihead Attention allows the model to jointly attend to information from different representation subspaces at different positions.
- Implementation of the multiply constant layer.
- Implementation of the MultiplyMerge module class.
- The Multi Quadratic activation function.
- Computes the two-dimensional convolution.
- Implementation of the negative log likelihood layer.
- This class is used to initialize the network with the given initialization rule.
- This class is used to initialize the weight matrix with the Nguyen-Widrow method.
- Implementation of the NoisyLinear layer class.
- Implementation of the NoRegularizer.
- Implementation of the Normal Distribution function.
- This class is used to initialize the weight matrix with the OIVS method.
- This class is used to initialize the weight matrix with the orthogonal matrix initialization.
- Implementation of the OrthogonalRegularizer.
- Implementation of the Padding module class.
- The Poisson one activation function.
- Implementation of the Poisson negative log likelihood loss.
- Positional Encoding injects some information about the relative or absolute position of the tokens in the sequence.
- The PReLU activation function (where alpha is trainable).
- The Quadratic activation function.
- This class is used to randomly initialize the weight matrix.
- Implementation of the Radial Basis Function layer.
- The implementation of the RBM module.
- The reconstruction loss performance function measures the network's performance as the negative log probability of the target under the input distribution.
- The rectifier function, f(x) = max(0, x).
- Implementation of the RecurrentLayer class.
- This class implements the Recurrent Model for Visual Attention, using a variety of possible layer implementations.
- Implementation of the reinforce normal layer.
- Implementation of the Reparametrization layer class.
- Implementation of a standard recurrent neural network container.
- The select module selects the specified column from a given input matrix.
- Implementation of the Sequential class.
- The SigmoidCrossEntropyError performance function measures the network's performance according to the cross-entropy function between the input and target distributions.
- Implementation of the Softmax layer.
- Implementation of the Softmin layer.
- The softplus function, f(x) = ln(1 + e^x).
- The Soft Shrink operator.
- The softsign function, f(x) = x / (1 + |x|).
- Implementation of the SpatialDropout layer.
- For more information, see the referenced paper.
- The Spline activation function.
- For more information, see the referenced paper.
- Implementation of the subview layer.
- Computes the two-dimensional convolution using singular value decomposition.
- The swish function, f(x) = x * sigmoid(x).
- The tanh (hyperbolic tangent) function.
- Implementation of the Transposed Convolution class.
- Declaration of the VirtualBatchNorm layer class.
- Implementation of the variance reduced classification reinforcement layer.
- Declaration of the WeightNorm layer class.
- For more information, see the referenced paper.
- For more information, see the referenced paper.
- Provides a backtrace.
- A static object whose constructor registers a parameter with the IO class.
- Utility struct to return the type that CLI11 should accept for a given input type.
- For vector types, CLI11 will accept a std::string, not an arma::Col<eT> (since it is not clear how to specify a vector on the command line).
- For matrix types, CLI11 will accept a std::string, not an arma::mat (since it is not clear how to specify a matrix on the command line).
- For row vector types, CLI11 will accept a std::string, not an arma::Row<eT> (since it is not clear how to specify a vector on the command line).
- For matrix-plus-dataset-info types, we should accept a std::string.
- The Go option class.
- The Julia option class.
- Used by the Markdown documentation generator to store multiple documentation objects, indexed by both the binding name and the language.
- The Markdown option class.
- The Python option class.
- The R option class.
- A static object whose constructor registers a parameter with the IO class.
- Ball bound that encloses a set of points at a specific distance (radius) from a specific point (center).
- A class to obtain compile-time traits about BoundType classes.
- A specialization of BoundTraits for this bound type.
- A specialization of BoundTraits for this bound type.
- The CellBound class describes a bound that consists of a number of hyperrectangles.
- Hollow ball bound that encloses a set of points at a specific distance (radius) from a specific point (center), except points at a specific distance from another point (the center of the hole).
- Hyper-rectangle bound for an L-metric.
- Utility struct where Value is true if and only if the argument is of type LMetric.
- Specialization for IsLMetric when the argument is of type LMetric.
- This class performs average interpolation to generate interpolation weights for neighborhood-based collaborative filtering.
- Implementation of the Batch SVD policy, acting as a wrapper when accessing Batch SVD from within CFType.
- Implementation of the Bias SVD policy, acting as a wrapper when accessing Bias SVD from within CFType.
- The model to save to disk.
- This class implements Collaborative Filtering (CF).
- This normalization class performs a sequence of normalization methods on raw ratings.
- Nearest neighbor search with cosine distance.
- This class acts as a dummy class for passing as a template parameter.
- This normalization class performs item mean normalization on raw ratings.
- Nearest neighbor search with L_p distance.
- Implementation of the NMF policy, acting as a wrapper when accessing NMF from within CFType.
- This normalization class doesn't perform any normalization.
- This normalization class performs overall mean normalization on raw ratings.
- Nearest neighbor search with Pearson distance (or furthest neighbor search with Pearson correlation).
- Implementation of the Randomized SVD policy, acting as a wrapper when accessing Randomized SVD from within CFType.
- Implementation of the regression-based interpolation method.
- Implementation of the Regularized SVD policy, acting as a wrapper when accessing Regularized SVD from within CFType.
- With SimilarityInterpolation, interpolation weights are based on similarities between the query user and its neighbors.
- Implementation of the SVD complete incremental policy, acting as a wrapper when accessing SVD complete decomposition from within CFType.
- Implementation of the SVD incomplete incremental policy, acting as a wrapper when accessing SVD incomplete incremental learning from within CFType.
- Implementation of the SVDPlusPlus policy, acting as a wrapper when accessing SVDPlusPlus from within CFType.
- This class acts as the wrapper for all SVD factorizers which are incompatible with the CF module.
- This normalization class performs user mean normalization on raw ratings.
- This normalization class performs z-score normalization on raw ratings.
- Accuracy is a metric of performance for classification algorithms, equal to the proportion of correctly labeled test items among all test items.
- An auxiliary class for cross-validation.
- F1 is a metric of performance for classification algorithms that, for binary classification, is equal to 2 * precision * recall / (precision + recall).
- The class KFoldCV implements k-fold cross-validation for regression and classification algorithms.
- MetaInfoExtractor is a tool for extracting meta information about a given machine learning algorithm.
- MeanSquaredError is a metric of performance for regression algorithms, equal to the mean squared error between predicted values and ground truth (correct) values for given test items.
- Precision is a metric of performance for classification algorithms that, for binary classification, is equal to tp / (tp + fp), where tp and fp are the numbers of true positives and false positives respectively.
- The R2 Score is a metric of performance for regression algorithms that represents the proportion of variance (of y) that has been explained by the independent variables in the model.
- Recall is a metric of performance for classification algorithms that, for binary classification, is equal to tp / (tp + fn), where tp and fn are the numbers of true positives and false negatives respectively.
- A type function that selects the right method form.
- The Silhouette Score is a metric of performance for clustering that represents the quality of the resulting clustering.
- SimpleCV splits data into two sets - training and validation sets - and then runs training on the training set and evaluates performance on the validation set.
- A wrapper struct for holding a Train form.
- Definition of the BagOfWordsEncodingPolicy class.
- This class is used to split a string into characters.
- A simple custom imputation class.
- Auxiliary information for a dataset, including mappings to/from strings (or other types) and the datatype of each dimension.
- DictionaryEncodingPolicy is used as a helper class for StringEncoding.
- Implements meta-data of images required by data::Load and data::Save for loading and saving images into arma::Mat.
- Given a dataset of a particular datatype, replace user-specified missing values with a variable dependent on the StrategyType and MapperType.
- IncrementPolicy is used as a helper class for DatasetMapper.
- A complete-case analysis to remove the values containing mappedValue.
- Loads a CSV file. This class uses boost::spirit to implement the parser; see http://theboostcpplibraries.com/boost.spirit for a quick review.
- A simple MaxAbs Scaler class.
- A simple mean imputation class.
- A simple Mean Normalization class.
- A simple median imputation class.
- A simple MinMax Scaler class.
- MissingPolicy is used as a helper class for DatasetMapper.
- A simple PCAWhitening class.
- The model to save to disk.
- Tokenizes a string using a set of delimiters.
- A simple Standard Scaler class.
- This class translates a set of strings into numbers using various encoding algorithms.
- This class provides a dictionary interface for the purpose of string encoding.
- This is a template struct that provides some information about various encoding policies.
- This specialization provides some information about the dictionary encoding policy.
- Definition of the TfIdfEncodingPolicy class.
- A simple ZCAWhitening class.
- DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a clustering technique described in the referenced paper.
- This class can be used to sequentially select the next point to use for DBSCAN.
- This class can be used to randomly select the next point to use for DBSCAN.
- This class implements a decision stump.
- A density estimation tree is similar to both a decision tree and a space partitioning tree (like a kd-tree).
- This class is responsible for caching the path to each node of the tree.
- A single multivariate Gaussian distribution with diagonal covariance.
- A discrete distribution where the only observations are discrete observations.
- This class represents the Gamma distribution.
- A single multivariate Gaussian distribution.
- The multivariate Laplace distribution centered at 0.
- A class that represents a univariate conditionally Gaussian distribution.
- A statistic for use with mlpack trees, which stores the upper bound on distance to nearest neighbors and the component to which this node belongs.
- Performs the MST calculation using the dual-tree Boruvka algorithm, using any type of tree.
- An edge pair is simply two indices and a distance.
- A Union-Find data structure.
- An implementation of fast exact max-kernel search.
- A utility struct to contain all the possible FastMKS models, for use by the mlpack_fastmks program.
- The FastMKSRules class is a template helper class used by the FastMKS class when performing exact max-kernel search.
- The statistic used in trees with FastMKS.
- Force a covariance matrix to be diagonal.
- A Diagonal Gaussian Mixture Model.
- Given a vector of eigenvalue ratios, ensure that the covariance matrix always has those eigenvalue ratios.
- This class contains methods which can fit a GMM to observations using the EM algorithm.
- A Gaussian Mixture Model (GMM).
- This class enforces no constraint on the covariance matrix.
- Given a covariance matrix, force the matrix to be positive definite.
- A class that represents a Hidden Markov Model with an arbitrary type of emission distribution.
- A serializable HMM model that also stores the type.
- This wrapper serves to adapt the interface of the cross-validation classes to one that can be utilized by the mlpack optimizers.
- A type function for deducing types of hyper-parameters from types of arguments in the Optimize method in HyperParameterTuner.
- Defines DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one is the type of an argument that should be fixed.
- Defines DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one (T) is a collection type or an arithmetic type.
- A type function to check whether Type is a collection type (for that, it should define value_type).
- A type function to deduce the resulting hyper-parameter type for ArgumentType.
- A struct for storing information about a fixed argument.
- The HyperParameterTuner class, for the given MLAlgorithm, utilizes the provided Optimizer to find the values of hyper-parameters that optimize the value of the given Metric.
- A type function for checking whether the given type is PreFixedArg.
- A struct for marking arguments as ones that should be fixed (useful for the Optimize method of HyperParameterTuner).
- The specialization of the template for references.
- Parses the command line for parameters and holds user-specified parameters.
- The KDE class is a template class for performing kernel density estimation.
- A dual-tree traversal Rules class for cleaning used trees before performing kernel density estimation.
- KDEDefaultParams contains the default input parameter values for KDE.
- A dual-tree traversal Rules class for kernel density estimation.
- Extra data for each node in the tree, for the task of kernel density estimation.
- KernelNormalizer holds a set of methods to normalize estimations, applying in each case the appropriate kernel normalizer function.
- The Cauchy kernel.
- The cosine distance (or cosine similarity).
- The Epanechnikov kernel.
- An example kernel function.
- The standard Gaussian kernel.
- The hyperbolic tangent kernel.
- This is a template class that can provide information about various kernels.
- Kernel traits for the Cauchy kernel.
- Kernel traits for the cosine distance.
- Kernel traits for the Epanechnikov kernel.
- Kernel traits for the Gaussian kernel.
- Kernel traits for the Laplacian kernel.
- Kernel traits for the spherical kernel.
- Kernel traits for the triangular kernel.
- Implementation of the k-means sampling scheme.
- The standard Laplacian kernel.
- The simple linear kernel (dot product).
- The simple polynomial kernel.
- The p-spectrum string kernel.
- The spherical kernel, which is 1 when the distance between the two argument points is less than or equal to the bandwidth, and 0 otherwise.
- The trivially simple triangular kernel, K(x, y) = max(0, 1 - ||x - y|| / bandwidth).
- Policy which allows K-Means to create empty clusters without any error being reported.
- An algorithm for an exact Lloyd iteration which simply uses dual-tree nearest-neighbor search to find the nearest centroid for each point in the dataset.
- Policy which allows K-Means to "kill" empty clusters without any error being reported.
- This class implements K-Means clustering, using a variety of possible implementations of Lloyd's algorithm.
- When an empty cluster is detected, this class takes the point furthest from the centroid of the cluster with maximum variance as a new cluster.
- This is an implementation of a single iteration of Lloyd's algorithm for k-means.
- An implementation of Pelleg-Moore's 'blacklist' algorithm for k-means clustering.
- The rules class for the single-tree Pelleg-Moore kd-tree traversal for k-means clustering.
- A statistic for trees which holds the blacklist for Pelleg-Moore k-means clustering (which represents the clusters that cannot possibly own any points in a node).
- A very simple partitioner which partitions the data randomly into the number of desired clusters.
- A refined approach for choosing initial points for k-means clustering.
- This class performs kernel principal components analysis (kernel PCA) for a given kernel.
- An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm regularized sparse coding; in LCC, the penalty on the absolute value of each point's coefficient for each atom is weighted by the squared distance of that point to that atom.
- Interface for generating distance-based constraints on a given dataset, provided corresponding true labels and a quantity parameter (k) are specified.
- An implementation of the Large Margin Nearest Neighbor metric learning technique.
- The Large Margin Nearest Neighbors function.
- Provides a convenient way to give formatted output.
- Transform the columns of the given matrix into a block format.
- Simple real-valued range.
- This class implements the popular nuclear norm minimization heuristic for matrix completion problems.
- This class implements mean shift clustering.
- BLEU, or the Bilingual Evaluation Understudy, is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another.
- Definition of the Intersection over Union metric.
- The inner product metric, IPMetric, takes a given Mercer kernel (KernelType), and when Evaluate() is called, returns the distance between the two points in kernel space: d(x, y) = sqrt(K(x, x) - 2 K(x, y) + K(y, y)).
- The L_p metric for arbitrary integer p, with an option to take the root.
- The Mahalanobis distance, which is essentially a stretched Euclidean distance.
- Definition of Non-Maximal Suppression.
- Meant to provide a good abstraction for users.
- The simple Naive Bayes classifier.
- An implementation of Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique.
- The "softmax" stochastic neighbor assignment probability function.
- This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class.
- The LSHSearch class; this class builds a hash on the reference set and uses this hash to compute the distance-approximate nearest neighbors of the given queries.
- This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class.
- The NeighborSearch class is a template class for performing distance-based neighbor searches.
- The NeighborSearchRules class is a template helper class used by the NeighborSearch class when performing distance-based neighbor searches.
- Compare two candidates based on the distance.
- Extra data for each node in the tree.
- The NSModel class provides an easy way to serialize a model, abstracts away the different types of trees, and also reflects the NeighborSearch API.
- The RAModel class provides an abstraction for the RASearch class, abstracting away the TreeType parameter and allowing it to be specified at runtime in this class.
- Extra data for each node in the tree.
- The RASearch class: this class provides a generic manner to perform rank-approximate search via random sampling.
- The RASearchRules class is a template helper class used by the RASearch class when performing rank-approximate search via random sampling.
- A sparse autoencoder is a neural network whose aim is to learn compressed representations of the data, typically for dimensionality reduction, with a constraint on the activity of the neurons in the network.
- This is a class for the sparse autoencoder objective function.
- Implementation of the exact SVD policy.
- This class implements principal components analysis (PCA).
- Implementation of the QUIC-SVD policy.
- Implementation of the randomized block Krylov SVD policy.
- Implementation of the randomized SVD policy.
- This class implements a simple perceptron (i.e., a single-layer neural network).
- This class is used to initialize weights for the weightVectors matrix in a random manner.
- This class is used to initialize the matrix weightVectors to zero.
- An implementation of RADICAL, an algorithm for independent component analysis (ICA).
- The RangeSearch class is a template class for performing range searches.
- The RangeSearchRules class is a template helper class used by the RangeSearch class when performing range searches.
- Statistic class for RangeSearch, to be set to the StatisticType of the tree type that range search is being performed with.
- A Bayesian approach to the maximum likelihood estimation of the parameters of the linear regression model.
- An implementation of LARS, a stage-wise homotopy-based algorithm for l1-regularized linear regression (LASSO) and l1+l2-regularized linear regression (Elastic Net).
- A simple linear regression algorithm using ordinary least squares.
- The LogisticRegression class implements an L2-regularized logistic regression model, and supports training with multiple optimizers and classification.
- The log-likelihood function for the logistic regression objective function.
- Softmax Regression is a classifier which can be used for classification when the data available can take two or more class values.
| Implementation of Acrobot game | |
| Wrapper of various asynchronous learning algorithms, e.g | |
| Implementation of Cart Pole task | |
| Implementation of action of Cart Pole | |
| Implementation of the state of Cart Pole | |
| Implementation of the Categorical Deep Q-Learning network | |
| To use the dummy environment, one may start by specifying the state and action dimensions | |
| Implementation of continuous action | |
| Implementation of state of the dummy environment | |
| Implementation of Continuous Double Pole Cart Balancing task | |
| Implementation of action of Continuous Double Pole Cart | |
| Implementation of the state of Continuous Double Pole Cart | |
| Implementation of Continuous Mountain Car task | |
| Implementation of action of Continuous Mountain Car | |
| Implementation of state of Continuous Mountain Car | |
| To use the dummy environment, one may start by specifying the state and action dimensions | |
| Implementation of discrete action | |
| Implementation of state of the dummy environment | |
| Implementation of Double Pole Cart Balancing task | |
| Implementation of action of Double Pole Cart | |
| Implementation of the state of Double Pole Cart | |
| Implementation of the Dueling Deep Q-Learning network | |
| Implementation of the epsilon-greedy policy | |
| Implementation of Mountain Car task | |
| Implementation of action of Mountain Car | |
| Implementation of state of Mountain Car | |
| Forward declaration of NStepQLearningWorker | |
| Forward declaration of OneStepQLearningWorker | |
| Forward declaration of OneStepSarsaWorker | |
| Implementation of Pendulum task | |
| Implementation of action of Pendulum | |
| Implementation of state of Pendulum | |
| Implementation of prioritized experience replay | |
| Implementation of various Q-Learning algorithms, such as DQN, double DQN | |
| Implementation of random experience replay | |
| Interface for clipping the reward to some value between the specified minimum and maximum values (clipping here is implemented as clamping the reward into that range) | |
| Implementation of Soft Actor-Critic, a model-free off-policy actor-critic based deep reinforcement learning algorithm | |
| Implementation of SumTree | |
| A data-dependent random dictionary initializer for SparseCoding | |
| A DictionaryInitializer for SparseCoding which does not initialize anything; it is useful for when the dictionary is already known and will be set with SparseCoding::Dictionary() | |
| A DictionaryInitializer for use with the SparseCoding class | |
| An implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net) | |
| Bias SVD is an improvement on Regularized SVD, which is a matrix factorization technique | |
| This class contains methods which are used to calculate the cost of BiasSVD's objective function, to calculate gradient of parameters with respect to the objective function, etc | |
| QUIC-SVD is a matrix factorization technique, which operates in a subspace such that A's approximation in that subspace has minimum error (A being the data matrix) | |
| Randomized Block Krylov SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition" | |
| Randomized SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions" | |
| Regularized SVD is a matrix factorization technique that seeks to reduce the error on the training set, that is, on the examples for which the ratings have been provided by the users | |
| The data is stored in a matrix of type MatType, so that this class can be used with both dense and sparse matrix types | |
| SVD++ is a matrix decomposition technique used in collaborative filtering | |
| This class contains methods which are used to calculate the cost of SVD++'s objective function, to calculate gradient of parameters with respect to the objective function, etc | |
| The LinearSVM class implements an L2-regularized support vector machine model, and supports training with multiple optimizers and classification | |
| The hinge loss function for the linear SVM objective function | |
| The timer class provides a way for mlpack methods to be timed | |
| The AllCategoricalSplit is a splitting function that will split categorical features into many children: one child for each category | |
| This dimension selection policy allows any dimension to be selected for splitting | |
| AxisParallelProjVector defines an axis-parallel projection vector | |
| The BestBinaryNumericSplit is a splitting function for decision trees that will exhaustively search a numeric dimension for the best binary split | |
| The BinaryNumericSplit class implements the numeric feature splitting strategy devised by Gama, Rocha, and Medas in the following paper: | |
| A binary space partitioning tree, such as a KD-tree or a ball tree | |
| A dual-tree traverser for binary space trees; see dual_tree_traverser.hpp | |
| A single-tree traverser for binary space trees; see single_tree_traverser.hpp for implementation | |
| A cover tree is a tree specifically designed to speed up nearest-neighbor computation in high-dimensional spaces | |
| A dual-tree cover tree traverser; see dual_tree_traverser.hpp | |
| A single-tree cover tree traverser; see single_tree_traverser.hpp for implementation | |
| The DiscreteHilbertValue class stores Hilbert values for all of the points in a RectangleTree node, and calculates Hilbert values for new points | |
| Empty statistic if you are not interested in storing statistics in your tree | |
| This is not an actual space tree but instead an example tree that exists to show and document all the functions that mlpack trees must implement | |
| This class is meant to be used as a choice for the policy class RootPointPolicy of the CoverTree class | |
| The Gini gain, a measure of set purity usable as a fitness function (FitnessFunction) for decision trees | |
| This class chooses the best child of a node in a Hilbert R tree when inserting a new point | |
| The splitting procedure for the Hilbert R tree | |
| This is the standard Hoeffding-bound categorical feature proposed in the paper below: | |
| The HoeffdingNumericSplit class implements the numeric feature splitting strategy alluded to by Domingos and Hulten in the following paper: | |
| The HoeffdingTree object represents all of the necessary information for a Hoeffding-bound-based decision tree | |
| This class is a serializable Hoeffding tree model that can hold four different types of Hoeffding trees | |
| HyperplaneBase defines a splitting hyperplane based on a projection vector and projection value | |
| The standard information gain criterion, used for calculating gain in decision trees | |
| A binary space partitioning tree node is split into its left and right child | |
| Information about the partition | |
| A binary space partitioning tree node is split into its left and right child | |
| A struct that contains information about the split | |
| The MinimalCoverageSweep class finds a partition along which we can split a node according to the coverage of two resulting nodes | |
| A struct that provides the type of the sweep cost | |
| The MinimalSplitsNumberSweep class finds a partition along which we can split a node according to the number of required splits of the node | |
| A struct that provides the type of the sweep cost | |
| This dimension selection policy allows the selection from a few random dimensions | |
| A dual-tree traverser; see dual_tree_traverser.hpp | |
| A single-tree traverser; see single_tree_traverser.hpp | |
| ProjVector defines a general projection vector (not necessarily axis-parallel) | |
| This dimension selection policy only selects one single random dimension | |
| A rectangle type tree, such as an R-tree or X-tree | |
| A dual tree traverser for rectangle type trees | |
| A single-tree traverser for rectangle type trees | |
| The RPlusPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split | |
| The RPlusTreeSplit class performs the split process of a node on overflow | |
| The RPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split | |
| This class splits a node by a random hyperplane | |
| Information about the partition | |
| This class splits a binary space tree | |
| Information about the partition | |
| When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them | |
| A Rectangle Tree has new points inserted at the bottom | |
| When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them | |
| A Rectangle Tree has new points inserted at the bottom | |
| A hybrid spill tree is a variant of binary space trees in which the children of a node can "spill over" each other, and contain shared datapoints | |
| A generic dual-tree traverser for hybrid spill trees; see spill_dual_tree_traverser.hpp for implementation | |
| A generic single-tree traverser for hybrid spill trees; see spill_single_tree_traverser.hpp for implementation | |
| The TraversalInfo class holds traversal information which is used in dual-tree (and single-tree) traversals | |
| The TreeTraits class provides compile-time information on the characteristics of a given tree type | |
| This is a specialization of the TreeType class to the BallTree tree type | |
| This is a specialization of the TreeType class to the UBTree tree type | |
| This is a specialization of the TreeType class to an arbitrary tree with HollowBallBound (currently only the vantage point tree is supported) | |
| This is a specialization of the TreeType class to the max-split random projection tree | |
| This is a specialization of the TreeType class to the mean-split random projection tree | |
| This is a specialization of the TreeTraits class to the BinarySpaceTree tree type | |
| The specialization of the TreeTraits class for the CoverTree tree type | |
| This is a specialization of the TreeTraits class to the Octree tree type | |
| Since the R+/R++ tree cannot have overlapping children, we should define traits for the R+/R++ tree | |
| This is a specialization of the TreeType class to the RectangleTree tree type | |
| This is a specialization of the TreeType class to the SpillTree tree type | |
| Split a node into two parts according to the median address of points contained in the node | |
| The class splits a binary space partitioning tree node according to the median distance to the vantage point | |
| A struct that contains information about the split | |
| The XTreeAuxiliaryInformation class provides information specific to X trees for each node in a RectangleTree | |
| The X tree requires that the tree records its "split history" | |
| A Rectangle Tree has new points inserted at the bottom | |
| This structure holds all of the information about bindings documentation | |
| Metaprogramming structure for vector detection | |
| Metaprogramming structure for vector detection | |
| Used for Log::Debug when not compiled with debugging symbols | |
| This structure holds all of the information about a single parameter, including its value (which is set when ParseCommandLine() is called) | |
| Allows us to output to an ostream with a prefix at the beginning of each line, in the same way we would output to cout or cerr | |