mlpack 3.4.2
Class Hierarchy

This inheritance list is sorted roughly, but not completely, alphabetically:
AdaBoost< perceptron::Perceptron<> >
AdaBoost< tree::ID3DecisionStump >
version< mlpack::adaboost::AdaBoost< WeakLearnerType, MatType > >
version< mlpack::ann::AddMerge< InputDataType, OutputDataType, CustomLayers...> >
version< mlpack::ann::AtrousConvolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType > >
version< mlpack::ann::BRNN< OutputLayerType, MergeLayerType, MergeOutputType, InitializationRuleType, CustomLayer...> >
version< mlpack::ann::Convolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType > >
version< mlpack::ann::FFN< OutputLayerType, InitializationRuleType, CustomLayer...> >
version< mlpack::ann::RNN< OutputLayerType, InitializationRuleType, CustomLayer...> >
version< mlpack::ann::Sequential< InputDataType, OutputDataType, Residual, CustomLayers...> >
version< mlpack::ann::TransposedConvolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType > >
version< mlpack::kde::KDE< KernelType, MetricType, MatType, TreeType, DualTreeTraversalType, SingleTreeTraversalType > >
static_visitor
AuxiliarySplitInfo< ElemType >
DatasetMapper< data::IncrementPolicy, double >
FastMKS< kernel::CosineDistance >
FastMKS< kernel::EpanechnikovKernel >
FastMKS< kernel::GaussianKernel >
FastMKS< kernel::HyperbolicTangentKernel >
FastMKS< kernel::LinearKernel >
FastMKS< kernel::PolynomialKernel >
FastMKS< kernel::TriangularKernel >
HMM< distribution::DiscreteDistribution >
HMM< distribution::GaussianDistribution >
HMM< distribution::RegressionDistribution >
HMM< gmm::DiagonalGMM >
HMM< gmm::GMM >
HRectBound< metric::EuclideanDistance, ElemType >
HRectBound< MetricType >
InitHMMModel
IPMetric< kernel::CosineDistance >
IPMetric< kernel::EpanechnikovKernel >
IPMetric< kernel::GaussianKernel >
IPMetric< kernel::HyperbolicTangentKernel >
IPMetric< kernel::LinearKernel >
IPMetric< kernel::PolynomialKernel >
IPMetric< kernel::TriangularKernel >
IsVector< VecType >: If value == true, then VecType is some sort of Armadillo vector or subview
IsVector< arma::Col< eT > >
IsVector< arma::Row< eT > >
IsVector< arma::SpCol< eT > >
IsVector< arma::SpRow< eT > >
IsVector< arma::SpSubview< eT > >
IsVector< arma::subview_col< eT > >
IsVector< arma::subview_row< eT > >
AdaBoost< WeakLearnerType, MatType >: The AdaBoost class
AdaBoostModel: The model to save to disk
AMF< TerminationPolicyType, InitializationRuleType, UpdateRuleType >: This class implements AMF (alternating matrix factorization) on the given matrix V
AverageInitialization: This initialization rule initializes matrices W and H to the root of the average of V, perturbed with uniform noise
CompleteIncrementalTermination< TerminationPolicy >: This class acts as a wrapper for basic termination policies to be used by SVDCompleteIncrementalLearning
GivenInitialization: This initialization rule for AMF simply fills the W and H matrices with the matrices given to the constructor of this object
IncompleteIncrementalTermination< TerminationPolicy >: This class acts as a wrapper for basic termination policies to be used by SVDIncompleteIncrementalLearning
MaxIterationTermination: This termination policy only terminates when the maximum number of iterations has been reached
MergeInitialization< WInitializationRuleType, HInitializationRuleType >: This initialization rule for AMF simply takes two initialization rules, and initializes W with the first rule and H with the second
NMFALSUpdate: This class implements a method titled 'Alternating Least Squares' described in the following paper:
NMFMultiplicativeDistanceUpdate: The multiplicative distance update rules for matrices W and H
NMFMultiplicativeDivergenceUpdate: This follows a method described in the paper 'Algorithms for Non-negative
RandomAcolInitialization< columnsToAverage >: This class initializes the W matrix of the AMF algorithm by averaging p randomly chosen columns of V
RandomInitialization: This initialization rule for AMF simply fills the W and H matrices with uniform random noise in [0, 1]
SimpleResidueTermination: This class implements a simple residue-based termination policy
SimpleToleranceTermination< MatType >: This class implements a residue tolerance termination policy
SVDBatchLearning: This class implements SVD batch learning with momentum
SVDCompleteIncrementalLearning< MatType >: This class computes SVD using complete incremental batch learning, as described in the following paper:
SVDCompleteIncrementalLearning< arma::sp_mat >: TODO: Merge this template-specialized function for sparse matrices using the common row_col_iterator
SVDIncompleteIncrementalLearning: This class computes SVD using incomplete incremental batch learning, as described in the following paper:
ValidationRMSETermination< MatType >: This class implements a validation termination policy based on RMSE
AdaptiveMaxPooling< InputDataType, OutputDataType >: Implementation of the AdaptiveMaxPooling layer
AdaptiveMeanPooling< InputDataType, OutputDataType >: Implementation of the AdaptiveMeanPooling layer
Add< InputDataType, OutputDataType >: Implementation of the Add module class
AddMerge< InputDataType, OutputDataType, CustomLayers >: Implementation of the AddMerge module class
AlphaDropout< InputDataType, OutputDataType >: The alpha-dropout layer is a regularizer that, with probability 'ratio', randomly sets input values to alphaDash
AtrousConvolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType >: Implementation of the Atrous Convolution class
AddTask: Generator of instances of the binary addition task
CopyTask: Generator of instances of the binary sequence copy task
SortTask: Generator of instances of the sequence sort task
BaseLayer< ActivationFunction, InputDataType, OutputDataType >: Implementation of the base layer
BatchNorm< InputDataType, OutputDataType >: Declaration of the Batch Normalization layer class
BernoulliDistribution< DataType >: Multiple independent Bernoulli distributions
BilinearInterpolation< InputDataType, OutputDataType >: Definition and implementation of the Bilinear Interpolation layer
BinaryRBM: For more information, see the following paper:
BRNN< OutputLayerType, MergeLayerType, MergeOutputType, InitializationRuleType, CustomLayers >: Implementation of a standard bidirectional recurrent neural network container
CELU< InputDataType, OutputDataType >: The CELU activation function, defined by
Concat< InputDataType, OutputDataType, CustomLayers >: Implementation of the Concat class
Concatenate< InputDataType, OutputDataType >: Implementation of the Concatenate module class
ConcatPerformance< OutputLayerType, InputDataType, OutputDataType >: Implementation of the concat performance class
Constant< InputDataType, OutputDataType >: Implementation of the constant layer
ConstInitialization: This class is used to initialize the weight matrix with constant values
Convolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType >: Implementation of the Convolution class
CosineEmbeddingLoss< InputDataType, OutputDataType >: The Cosine Embedding loss function measures whether two inputs are similar or dissimilar using the cosine distance, and is typically used for learning nonlinear embeddings or for semi-supervised learning
CReLU< InputDataType, OutputDataType >: A concatenated ReLU has two outputs, one ReLU and one negative ReLU, concatenated together
CrossEntropyError< InputDataType, OutputDataType >: The cross-entropy performance function measures the network's performance according to the cross-entropy between the input and target distributions
DCGAN: For more information, see the following paper:
DiceLoss< InputDataType, OutputDataType >: The dice loss performance function measures the network's performance according to the dice coefficient between the input and target distributions
DropConnect< InputDataType, OutputDataType >: The DropConnect layer is a regularizer that, with probability 'ratio', randomly sets connection values to zero and scales the remaining elements by a factor of 1 / (1 - ratio)
Dropout< InputDataType, OutputDataType >: The dropout layer is a regularizer that, with probability 'ratio', randomly sets input values to zero and scales the remaining elements by a factor of 1 / (1 - ratio) at training time (rather than scaling at test time), so that the expected sum stays the same
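The scaling described for the Dropout layer can be sketched in a few lines. This is an illustrative implementation of "inverted" dropout under the stated 1 / (1 - ratio) convention, not mlpack's actual Dropout class:

```cpp
#include <cstdlib>
#include <vector>

// Inverted dropout sketch: each input is zeroed with probability `ratio`;
// survivors are scaled by 1 / (1 - ratio) so that the expected sum of the
// output matches the input (no rescaling is then needed at test time).
std::vector<double> InvertedDropout(const std::vector<double>& input,
                                    double ratio,
                                    unsigned seed)
{
  std::srand(seed);
  std::vector<double> output(input.size());
  const double scale = 1.0 / (1.0 - ratio);
  for (std::size_t i = 0; i < input.size(); ++i)
  {
    const double u = std::rand() / (double) RAND_MAX;
    output[i] = (u < ratio) ? 0.0 : input[i] * scale;
  }
  return output;
}
```

Every output element is therefore either zero or the corresponding input times 1 / (1 - ratio).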
EarthMoverDistance< InputDataType, OutputDataType >: The earth mover distance function measures the network's performance according to the Kantorovich-Rubinstein duality approximation
ElishFunction: The ELiSH function, defined by
ElliotFunction: The Elliot function, defined by
ELU< InputDataType, OutputDataType >: The ELU activation function, defined by
EmptyLoss< InputDataType, OutputDataType >: The empty loss does nothing, letting the user calculate the loss outside the model
FastLSTM< InputDataType, OutputDataType >: An implementation of a faster version of the LSTM network layer
FFN< OutputLayerType, InitializationRuleType, CustomLayers >: Implementation of a standard feed-forward network
FFTConvolution< BorderMode, padLastDim >: Computes the two-dimensional convolution through the FFT
FlexibleReLU< InputDataType, OutputDataType >: The FlexibleReLU activation function, defined by
FullConvolution
GAN< Model, InitializationRuleType, Noise, PolicyType >: The implementation of the standard GAN module
GaussianFunction: The Gaussian function, defined by
GaussianInitialization: This class is used to initialize the weight matrix with a Gaussian distribution
GELUFunction: The GELU function, defined by
Glimpse< InputDataType, OutputDataType >: The glimpse layer returns a retina-like representation (down-scaled cropped images) of increasing scale around a given location in a given image
GlorotInitializationType< Uniform >: This class is used to initialize the weight matrix with the Glorot initialization method
GRU< InputDataType, OutputDataType >: An implementation of a GRU network layer
HardShrink< InputDataType, OutputDataType >: The Hard Shrink operator is defined as,
HardSigmoidFunction: The hard sigmoid function, defined by
HardTanH< InputDataType, OutputDataType >: The Hard Tanh activation function, defined by
HeInitialization: This class is used to initialize the weight matrix with the He initialization rule given by He et al.
Highway< InputDataType, OutputDataType, CustomLayers >: Implementation of the Highway layer
HingeEmbeddingLoss< InputDataType, OutputDataType >: The Hinge Embedding loss function is often used to compute the loss between y_true and y_pred
HuberLoss< InputDataType, OutputDataType >: The Huber loss is a loss function used in robust regression that is less sensitive to outliers in the data than the squared error loss
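The robustness property of the Huber loss comes from its piecewise definition: quadratic for small residuals, linear for large ones. A minimal sketch of the standard single-residual form (illustrative, not mlpack's HuberLoss class):

```cpp
#include <cmath>

// Standard Huber loss for a single residual r = prediction - target:
// 0.5 * r^2 for |r| <= delta, and delta * (|r| - 0.5 * delta) otherwise,
// so large outliers grow only linearly rather than quadratically.
double Huber(double r, double delta)
{
  const double a = std::fabs(r);
  return (a <= delta) ? 0.5 * r * r : delta * (a - 0.5 * delta);
}
```

The two pieces agree in value and slope at |r| = delta, which is what makes the loss smooth enough for gradient-based training.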
IdentityFunction: The identity function, defined by
InitTraits< InitRuleType >: This is a template class that can provide information about various initialization methods
InitTraits< KathirvalavakumarSubavathiInitialization >: Initialization traits of the Kathirvalavakumar-Subavathi initialization rule
InitTraits< NguyenWidrowInitialization >: Initialization traits of the Nguyen-Widrow initialization rule
InvQuadFunction: The Inverse Quadratic function, defined by
Join< InputDataType, OutputDataType >: Implementation of the Join module class
KathirvalavakumarSubavathiInitialization: This class is used to initialize the weight matrix with the method proposed by T. Kathirvalavakumar and S. Subavathi
KLDivergence< InputDataType, OutputDataType >: The Kullback–Leibler divergence is often used for continuous distributions (direct regression)
L1Loss< InputDataType, OutputDataType >: The L1 loss is a loss function that measures the mean absolute error (MAE) between each element in the input x and target y
LayerNorm< InputDataType, OutputDataType >: Declaration of the Layer Normalization class
LayerTraits< LayerType >: This is a template class that can provide information about various layers
LeakyReLU< InputDataType, OutputDataType >: The LeakyReLU activation function, defined by
LecunNormalInitialization: This class is used to initialize the weight matrix with the Lecun Normal initialization rule
Linear< InputDataType, OutputDataType, RegularizerType >: Implementation of the Linear layer class
Linear3D< InputDataType, OutputDataType, RegularizerType >: Implementation of the Linear3D layer class
LinearNoBias< InputDataType, OutputDataType, RegularizerType >: Implementation of the LinearNoBias class
LiSHTFunction: The LiSHT function, defined by
LogCoshLoss< InputDataType, OutputDataType >: The Log-Hyperbolic-Cosine loss function is often used to improve variational autoencoders
LogisticFunction: The logistic function, defined by
LogSoftMax< InputDataType, OutputDataType >: Implementation of the log softmax layer
Lookup< InputDataType, OutputDataType >: The Lookup class stores word embeddings and retrieves them using tokens
LRegularizer< TPower >: The L_p regularizer for arbitrary integer p
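The L_p penalty behind LRegularizer< TPower > is just the sum of the p-th powers of the absolute weights, scaled by a regularization factor. A sketch of that penalty (illustrative; the names `LpPenalty` and `factor` are not mlpack's, and mlpack applies the gradient of this quantity rather than the value):

```cpp
#include <cmath>
#include <vector>

// L_p regularization penalty for integer p:
// factor * sum_i |w_i|^p. p = 1 gives lasso-style, p = 2 ridge-style.
double LpPenalty(const std::vector<double>& w, int p, double factor)
{
  double sum = 0.0;
  for (double v : w)
    sum += std::pow(std::fabs(v), p);
  return factor * sum;
}
```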
LSTM< InputDataType, OutputDataType >: Implementation of the LSTM module class
MarginRankingLoss< InputDataType, OutputDataType >: Margin ranking loss measures the loss given inputs and a label vector with values of 1 or -1
MaxPooling< InputDataType, OutputDataType >: Implementation of the MaxPooling layer
MaxPoolingRule
MeanAbsolutePercentageError< InputDataType, OutputDataType >: The mean absolute percentage error performance function measures the network's performance according to the mean of the absolute difference between input and target, divided by the target
MeanBiasError< InputDataType, OutputDataType >: The mean bias error performance function measures the network's performance according to the mean of errors
MeanPooling< InputDataType, OutputDataType >: Implementation of the MeanPooling layer
MeanPoolingRule
MeanSquaredError< InputDataType, OutputDataType >: The mean squared error performance function measures the network's performance according to the mean of squared errors
MeanSquaredLogarithmicError< InputDataType, OutputDataType >: The mean squared logarithmic error performance function measures the network's performance according to the mean of squared logarithmic errors
MiniBatchDiscrimination< InputDataType, OutputDataType >: Implementation of the MiniBatchDiscrimination layer
MishFunction: The Mish function, defined by
MultiheadAttention< InputDataType, OutputDataType, RegularizerType >: Multihead Attention allows the model to jointly attend to information from different representation subspaces at different positions
MultiplyConstant< InputDataType, OutputDataType >: Implementation of the multiply constant layer
MultiplyMerge< InputDataType, OutputDataType, CustomLayers >: Implementation of the MultiplyMerge module class
MultiQuadFunction: The Multi Quadratic function, defined by
NaiveConvolution< BorderMode >: Computes the two-dimensional convolution
NegativeLogLikelihood< InputDataType, OutputDataType >: Implementation of the negative log likelihood layer
NetworkInitialization< InitializationRuleType, CustomLayers >: This class is used to initialize the network with the given initialization rule
NguyenWidrowInitialization: This class is used to initialize the weight matrix with the Nguyen-Widrow method
NoisyLinear< InputDataType, OutputDataType >: Implementation of the NoisyLinear layer class
NoRegularizer: Implementation of the NoRegularizer
NormalDistribution< DataType >: Implementation of the Normal Distribution function
OivsInitialization< ActivationFunction >: This class is used to initialize the weight matrix with the OIVS method
OrthogonalInitialization: This class is used to initialize the weight matrix with the orthogonal matrix initialization
OrthogonalRegularizer: Implementation of the OrthogonalRegularizer
Padding< InputDataType, OutputDataType >: Implementation of the Padding module class
Poisson1Function: The Poisson one function, defined by
PoissonNLLLoss< InputDataType, OutputDataType >: Implementation of the Poisson negative log likelihood loss
PositionalEncoding< InputDataType, OutputDataType >: Positional Encoding injects some information about the relative or absolute position of the tokens in the sequence
PReLU< InputDataType, OutputDataType >: The PReLU activation function, defined by (where alpha is trainable)
QuadraticFunction: The Quadratic function, defined by
RandomInitialization: This class is used to randomly initialize the weight matrix
RBF< InputDataType, OutputDataType, Activation >: Implementation of the Radial Basis Function layer
RBM< InitializationRuleType, DataType, PolicyType >: The implementation of the RBM module
ReconstructionLoss< InputDataType, OutputDataType, DistType >: The reconstruction loss performance function measures the network's performance as the negative log probability of the target under the input distribution
RectifierFunction: The rectifier function, defined by
Recurrent< InputDataType, OutputDataType, CustomLayers >: Implementation of the RecurrentLayer class
RecurrentAttention< InputDataType, OutputDataType >: This class implements the Recurrent Model for Visual Attention, using a variety of possible layer implementations
ReinforceNormal< InputDataType, OutputDataType >: Implementation of the reinforce normal layer
Reparametrization< InputDataType, OutputDataType >: Implementation of the Reparametrization layer class
RNN< OutputLayerType, InitializationRuleType, CustomLayers >: Implementation of a standard recurrent neural network container
Select< InputDataType, OutputDataType >: The select module selects the specified column from a given input matrix
Sequential< InputDataType, OutputDataType, residual, CustomLayers >: Implementation of the Sequential class
SigmoidCrossEntropyError< InputDataType, OutputDataType >: The SigmoidCrossEntropyError performance function measures the network's performance according to the cross-entropy function between the input and target distributions
SoftMarginLoss< InputDataType, OutputDataType >
Softmax< InputDataType, OutputDataType >: Implementation of the Softmax layer
Softmin< InputDataType, OutputDataType >: Implementation of the Softmin layer
SoftplusFunction: The softplus function, defined by
SoftShrink< InputDataType, OutputDataType >: The Soft Shrink operator is defined as,

\begin{eqnarray*} f(x) &=& \begin{cases} x - \lambda & : x > \lambda \\ x + \lambda & : x < -\lambda \\ 0 & : otherwise. \\ \end{cases} \\ f'(x) &=& \begin{cases} 1 & : x > \lambda \\ 1 & : x < -\lambda \\ 0 & : otherwise. \end{cases} \end{eqnarray*}
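The piecewise definition above transcribes directly into code. This is an illustrative scalar sketch, not mlpack's SoftShrink layer (which operates elementwise on matrices):

```cpp
// Soft shrink, as in the formula above:
// f(x) = x - lambda  if x >  lambda,
//        x + lambda  if x < -lambda,
//        0           otherwise.
double SoftShrink(double x, double lambda)
{
  if (x > lambda)  return x - lambda;
  if (x < -lambda) return x + lambda;
  return 0.0;
}
```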

SoftsignFunction: The softsign function, defined by
SpatialDropout< InputDataType, OutputDataType >: Implementation of the SpatialDropout layer
SpikeSlabRBM: For more information, see the following paper:
SplineFunction: The Spline function, defined by
StandardGAN: For more information, see the following paper:
Subview< InputDataType, OutputDataType >: Implementation of the subview layer
SVDConvolution< BorderMode >: Computes the two-dimensional convolution using singular value decomposition
SwishFunction: The swish function, defined by
TanhFunction: The tanh function, defined by
TransposedConvolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType >: Implementation of the Transposed Convolution class
ValidConvolution
VirtualBatchNorm< InputDataType, OutputDataType >: Declaration of the VirtualBatchNorm layer class
VRClassReward< InputDataType, OutputDataType >: Implementation of the variance-reduced classification reinforcement layer
WeightNorm< InputDataType, OutputDataType, CustomLayers >: Declaration of the WeightNorm layer class
WGAN: For more information, see the following paper:
WGANGP: For more information, see the following paper:
Backtrace: Provides a backtrace
CLIOption< N >: A static object whose constructor registers a parameter with the IO class
ParameterType< T >: Utility struct to return the type that CLI11 should accept for a given input type
ParameterType< arma::Col< eT > >: For vector types, CLI11 will accept a std::string, not an arma::Col<eT> (since it is not clear how to specify a vector on the command line)
ParameterType< arma::Mat< eT > >: For matrix types, CLI11 will accept a std::string, not an arma::mat (since it is not clear how to specify a matrix on the command line)
ParameterType< arma::Row< eT > >: For row vector types, CLI11 will accept a std::string, not an arma::Row<eT> (since it is not clear how to specify a vector on the command line)
ParameterType< std::tuple< mlpack::data::DatasetMapper< PolicyType, std::string >, arma::Mat< eT > > >: For matrix-plus-dataset-info types, we should accept a std::string
ParameterTypeDeducer< HasSerialize, T >
ParameterTypeDeducer< true, T >
GoOption< T >: The Go option class
JuliaOption< T >: The Julia option class
BindingInfo: Used by the Markdown documentation generator to store multiple documentation objects, indexed by both the binding name (i.e
ExampleWrapper
LongDescriptionWrapper
MDOption< T >: The Markdown option class
ProgramNameWrapper
SeeAlsoWrapper
ShortDescriptionWrapper
PyOption< T >: The Python option class
ROption< T >: The R option class
TestOption< N >: A static object whose constructor registers a parameter with the IO class
BallBound< MetricType, VecType >: Ball bound encloses a set of points at a specific distance (radius) from a specific point (center)
BoundTraits< BoundType >: A class to obtain compile-time traits about BoundType classes
BoundTraits< BallBound< MetricType, VecType > >: A specialization of BoundTraits for this bound type
BoundTraits< CellBound< MetricType, ElemType > >
BoundTraits< HollowBallBound< MetricType, ElemType > >: A specialization of BoundTraits for this bound type
BoundTraits< HRectBound< MetricType, ElemType > >
CellBound< MetricType, ElemType >: The CellBound class describes a bound that consists of a number of hyperrectangles
HollowBallBound< TMetricType, ElemType >: Hollow ball bound encloses a set of points at a specific distance (radius) from a specific point (center), except points at a specific distance from another point (the center of the hole)
HRectBound< MetricType, ElemType >: Hyper-rectangle bound for an L-metric
IsLMetric< MetricType >: Utility struct where Value is true if and only if the argument is of type LMetric
IsLMetric< metric::LMetric< Power, TakeRoot > >: Specialization of IsLMetric when the argument is of type LMetric
AverageInterpolation: This class performs average interpolation to generate interpolation weights for neighborhood-based collaborative filtering
BatchSVDPolicy: Implementation of the Batch SVD policy to act as a wrapper when accessing Batch SVD from within CFType
BiasSVDPolicy: Implementation of the Bias SVD policy to act as a wrapper when accessing Bias SVD from within CFType
CFModel: The model to save to disk
CFType< DecompositionPolicy, NormalizationType >: This class implements Collaborative Filtering (CF)
CombinedNormalization< NormalizationTypes >: This normalization class performs a sequence of normalization methods on raw ratings
CosineSearch: Nearest neighbor search with cosine distance
DummyClass: This class acts as a dummy class for passing as a template parameter
ItemMeanNormalization: This normalization class performs item mean normalization on raw ratings
LMetricSearch< TPower >: Nearest neighbor search with L_p distance
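The L_p distance underlying LMetricSearch< TPower > is the usual Minkowski metric. A small illustrative sketch (not mlpack's LMetric class, which is templated on the power and on whether the root is taken):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// L_p (Minkowski) distance: d(a, b) = (sum_i |a_i - b_i|^p)^(1/p).
// p = 1 gives Manhattan distance, p = 2 Euclidean distance.
double LpDistance(const std::vector<double>& a,
                  const std::vector<double>& b,
                  double p)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    sum += std::pow(std::fabs(a[i] - b[i]), p);
  return std::pow(sum, 1.0 / p);
}
```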
NMFPolicy: Implementation of the NMF policy to act as a wrapper when accessing NMF from within CFType
NoNormalization: This normalization class doesn't perform any normalization
OverallMeanNormalization: This normalization class performs overall mean normalization on raw ratings
PearsonSearch: Nearest neighbor search with Pearson distance (or furthest neighbor search with Pearson correlation)
RandomizedSVDPolicy: Implementation of the Randomized SVD policy to act as a wrapper when accessing Randomized SVD from within CFType
RegressionInterpolation: Implementation of a regression-based interpolation method
RegSVDPolicy: Implementation of the Regularized SVD policy to act as a wrapper when accessing Regularized SVD from within CFType
SimilarityInterpolation: With SimilarityInterpolation, interpolation weights are based on similarities between the query user and its neighbors
SVDCompletePolicy: Implementation of the SVD complete incremental policy to act as a wrapper when accessing the SVD complete decomposition from within CFType
SVDIncompletePolicy: Implementation of the SVD incomplete incremental policy to act as a wrapper when accessing SVD incomplete incremental from within CFType
SVDPlusPlusPolicy: Implementation of the SVDPlusPlus policy to act as a wrapper when accessing SVDPlusPlus from within CFType
SVDWrapper< Factorizer >: This class acts as the wrapper for all SVD factorizers which are incompatible with the CF module
UserMeanNormalization: This normalization class performs user mean normalization on raw ratings
ZScoreNormalization: This normalization class performs z-score normalization on raw ratings
Accuracy: Accuracy is a metric of performance for classification algorithms equal to the proportion of correctly labeled test items among all test items
CVBase< MLAlgorithm, MatType, PredictionsType, WeightsType >: An auxiliary class for cross-validation
F1< AS, PositiveClass >: F1 is a metric of performance for classification algorithms that for binary classification is equal to $ 2 * precision * recall / (precision + recall) $
KFoldCV< MLAlgorithm, Metric, MatType, PredictionsType, WeightsType >: The class KFoldCV implements k-fold cross-validation for regression and classification algorithms
MetaInfoExtractor< MLAlgorithm, MT, PT, WT >: MetaInfoExtractor is a tool for extracting meta information about a given machine learning algorithm
MSE: The MeanSquaredError is a metric of performance for regression algorithms equal to the mean squared error between predicted values and ground truth (correct) values for given test items
NotFoundMethodForm
Precision< AS, PositiveClass >: Precision is a metric of performance for classification algorithms that for binary classification is equal to $ tp / (tp + fp) $, where $ tp $ and $ fp $ are the numbers of true positives and false positives respectively
R2Score: The R2 Score is a metric of performance for regression algorithms that represents the proportion of variance in the dependent variable (y) that has been explained by the independent variables in the model
Recall< AS, PositiveClass >: Recall is a metric of performance for classification algorithms that for binary classification is equal to $ tp / (tp + fn) $, where $ tp $ and $ fn $ are the numbers of true positives and false negatives respectively
oCSelectMethodForm< MLAlgorithm, HMFs >A type function that selects a right method form
oCSelectMethodForm< MLAlgorithm >
oCSelectMethodForm< MLAlgorithm >::From< Forms >
oCSelectMethodForm< MLAlgorithm, HasMethodForm, HMFs...>
oCSelectMethodForm< MLAlgorithm, HasMethodForm, HMFs...>::From< Forms >
oCSilhouetteScoreThe Silhouette Score is a metric of performance for clustering that represents the quality of clusters made as a result
oCSimpleCV< MLAlgorithm, Metric, MatType, PredictionsType, WeightsType >SimpleCV splits data into two sets - training and validation sets - and then runs training on the training set and evaluates performance on the validation set
oCTrainForm< MatType, PredictionsType, WeightsType, DatasetInfo, NumClasses >A wrapper struct for holding a Train form
oCTrainFormBase4< PT, WT, T1, T2 >
oCTrainFormBase5< PT, WT, T1, T2, T3 >
oCTrainFormBase6< PT, WT, T1, T2, T3, T4 >
oCTrainFormBase7< PT, WT, T1, T2, T3, T4, T5 >
oCBagOfWordsEncodingPolicyDefinition of the BagOfWordsEncodingPolicy class
oCCharExtractThe class is used to split a string into characters
oCCustomImputation< T >A simple custom imputation class
oCDatasetMapper< PolicyType, InputType >Auxiliary information for a dataset, including mappings to/from strings (or other types) and the datatype of each dimension
oCDictionaryEncodingPolicyDicitonaryEnocdingPolicy is used as a helper class for StringEncoding
oCHasSerialize< T >
oCHasSerialize< T >::check< U, V, W >
oCHasSerializeFunction< T >
oCImageInfoImplements meta-data of images required by data::Load and data::Save for loading and saving images into arma::Mat
oCImputer< T, MapperType, StrategyType >Given a dataset of a particular datatype, replace user-specified missing value with a variable dependent on the StrategyType and MapperType
oCIncrementPolicyIncrementPolicy is used as a helper class for DatasetMapper
oCListwiseDeletion< T >A complete-case analysis to remove the values containing mappedValue
oCLoadCSVLoad the csv file.This class use boost::spirit to implement the parser, please refer to following link http://theboostcpplibraries.com/boost.spirit for quick review
oCMaxAbsScalerA simple MaxAbs Scaler class
oCMeanImputation< T >A simple mean imputation class
oCMeanNormalizationA simple Mean Normalization class
oCMedianImputation< T >This is a class implementation of simple median imputation
oCMinMaxScalerA simple MinMax Scaler class
oCMissingPolicyMissingPolicy is used as a helper class for DatasetMapper
oCPCAWhiteningA simple PCAWhitening class
oCScalingModelThe model to save to disk
oCSplitByAnyOfTokenizes a string using a set of delimiters
oCStandardScalerA simple Standard Scaler class
oCStringEncoding< EncodingPolicyType, DictionaryType >The class translates a set of strings into numbers using various encoding algorithms
oCStringEncodingDictionary< Token >This class provides a dictionary interface for the purpose of string encoding
oCStringEncodingDictionary< boost::string_view >
oCStringEncodingDictionary< int >
oCStringEncodingPolicyTraits< PolicyType >This is a template struct that provides some information about various encoding policies
oCStringEncodingPolicyTraits< DictionaryEncodingPolicy >The specialization provides some information about the dictionary encoding policy
oCTfIdfEncodingPolicyDefinition of the TfIdfEncodingPolicy class
oCZCAWhiteningA simple ZCAWhitening class
oCDBSCAN< RangeSearchType, PointSelectionPolicy >DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a clustering technique described in the following paper:
oCOrderedPointSelectionThis class can be used to sequentially select the next point to use for DBSCAN
oCRandomPointSelectionThis class can be used to randomly select the next point to use for DBSCAN
oCDecisionStump< MatType >This class implements a decision stump
oCDTree< MatType, TagType >A density estimation tree is similar to both a decision tree and a space partitioning tree (like a kd-tree)
oCPathCacherThis class is responsible for caching the path to each node of the tree
oCDiagonalGaussianDistributionA single multivariate Gaussian distribution with diagonal covariance
oCDiscreteDistributionA discrete distribution where the only observations are discrete observations
oCGammaDistributionThis class represents the Gamma distribution
oCGaussianDistributionA single multivariate Gaussian distribution
oCLaplaceDistributionThe multivariate Laplace distribution centered at 0 has pdf
oCRegressionDistributionA class that represents a univariate conditionally Gaussian distribution
oCDTBRules< MetricType, TreeType >
oCDTBStatA statistic for use with mlpack trees, which stores the upper bound on distance to nearest neighbors and the component which this node belongs to
oCDualTreeBoruvka< MetricType, MatType, TreeType >Performs the MST calculation using the Dual-Tree Boruvka algorithm, using any type of tree
oCEdgePairAn edge pair is simply two indices and a distance
oCUnionFindA Union-Find data structure
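The UnionFind entry above is the classic disjoint-set structure used by dual-tree Borůvka to merge MST components. A minimal sketch of the idea in standard C++ with path halving and union by rank — an illustration of the concept only, not mlpack's `UnionFind` interface:

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Disjoint-set forest: Find() returns a component representative,
// Union() merges two components.
class UnionFind
{
 public:
  explicit UnionFind(std::size_t n) : parent(n), rank(n, 0)
  {
    std::iota(parent.begin(), parent.end(), 0); // Each element is its own root.
  }

  std::size_t Find(std::size_t x)
  {
    while (parent[x] != x)
    {
      parent[x] = parent[parent[x]]; // Path halving keeps trees shallow.
      x = parent[x];
    }
    return x;
  }

  void Union(std::size_t a, std::size_t b)
  {
    a = Find(a); b = Find(b);
    if (a == b) return;
    if (rank[a] < rank[b]) std::swap(a, b); // Attach the shorter tree.
    parent[b] = a;
    if (rank[a] == rank[b]) ++rank[a];
  }

 private:
  std::vector<std::size_t> parent, rank;
};
```

Both operations run in near-constant amortized time, which is what makes Borůvka's repeated component merging cheap.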
oCFastMKS< KernelType, MatType, TreeType >An implementation of fast exact max-kernel search
oCFastMKSModelA utility struct to contain all the possible FastMKS models, for use by the mlpack_fastmks program
oCFastMKSRules< KernelType, TreeType >The FastMKSRules class is a template helper class used by FastMKS class when performing exact max-kernel search
oCFastMKSStatThe statistic used in trees with FastMKS
oCDiagonalConstraintForce a covariance matrix to be diagonal
oCDiagonalGMMA Diagonal Gaussian Mixture Model
oCEigenvalueRatioConstraintGiven a vector of eigenvalue ratios, ensure that the covariance matrix always has those eigenvalue ratios
oCEMFit< InitialClusteringType, CovarianceConstraintPolicy, Distribution >This class contains methods which can fit a GMM to observations using the EM algorithm
oCGMMA Gaussian Mixture Model (GMM)
oCNoConstraintThis class enforces no constraint on the covariance matrix
oCPositiveDefiniteConstraintGiven a covariance matrix, force the matrix to be positive definite
oCHMM< Distribution >A class that represents a Hidden Markov Model with an arbitrary type of emission distribution
oCHMMModelA serializable HMM model that also stores the type
oCCVFunction< CVType, MLAlgorithm, TotalArgs, BoundArgs >This wrapper adapts the interface of the cross-validation classes to one that can be utilized by the mlpack optimizers
oCDeduceHyperParameterTypes< Args >A type function for deducing types of hyper-parameters from types of arguments in the Optimize method in HyperParameterTuner
oCDeduceHyperParameterTypes< Args >::ResultHolder< HPTypes >
oCDeduceHyperParameterTypes< PreFixedArg< T >, Args...>Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one is the type of an argument that should be fixed
oCDeduceHyperParameterTypes< PreFixedArg< T >, Args...>::ResultHolder< HPTypes >
oCDeduceHyperParameterTypes< T, Args...>Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one (T) is a collection type or an arithmetic type
oCDeduceHyperParameterTypes< T, Args...>::IsCollectionType< Type >A type function to check whether Type is a collection type (for that it should define value_type)
oCDeduceHyperParameterTypes< T, Args...>::ResultHolder< HPTypes >
oCDeduceHyperParameterTypes< T, Args...>::ResultHPType< ArgumentType, IsArithmetic >A type function to deduce the result hyper-parameter type for ArgumentType
oCDeduceHyperParameterTypes< T, Args...>::ResultHPType< ArithmeticType, true >
oCDeduceHyperParameterTypes< T, Args...>::ResultHPType< CollectionType, false >
oCFixedArg< T, I >A struct for storing information about a fixed argument
oCHyperParameterTuner< MLAlgorithm, Metric, CV, OptimizerType, MatType, PredictionsType, WeightsType >The class HyperParameterTuner for the given MLAlgorithm utilizes the provided Optimizer to find the values of hyper-parameters that optimize the value of the given Metric
oCIsPreFixedArg< T >A type function for checking whether the given type is PreFixedArg
oCPreFixedArg< typename >A struct for marking arguments as ones that should be fixed (it can be useful for the Optimize method of HyperParameterTuner)
oCPreFixedArg< T & >The specialization of the template for references
oCIOParses the command line for parameters and holds user-specified parameters
oCKDE< KernelType, MetricType, MatType, TreeType, DualTreeTraversalType, SingleTreeTraversalType >The KDE class is a template class for performing Kernel Density Estimations
oCKDECleanRules< TreeType >A dual-tree traversal Rules class for cleaning used trees before performing kernel density estimation
oCKDEDefaultParamsKDEDefaultParams contains the default input parameter values for KDE
oCKDEModel
oCKDERules< MetricType, KernelType, TreeType >A dual-tree traversal Rules class for kernel density estimation
oCKDEStatExtra data for each node in the tree for the task of kernel density estimation
oCKernelNormalizerKernelNormalizer holds a set of methods to normalize estimations, applying in each case the appropriate kernel normalizer function
oCCauchyKernelThe Cauchy kernel
oCCosineDistanceThe cosine distance (or cosine similarity)
oCEpanechnikovKernelThe Epanechnikov kernel, defined as
oCExampleKernelAn example kernel function
oCGaussianKernelThe standard Gaussian kernel
oCHyperbolicTangentKernelHyperbolic tangent kernel
oCKernelTraits< KernelType >This is a template class that can provide information about various kernels
oCKernelTraits< CauchyKernel >Kernel traits for the Cauchy kernel
oCKernelTraits< CosineDistance >Kernel traits for the cosine distance
oCKernelTraits< EpanechnikovKernel >Kernel traits for the Epanechnikov kernel
oCKernelTraits< GaussianKernel >Kernel traits for the Gaussian kernel
oCKernelTraits< LaplacianKernel >Kernel traits of the Laplacian kernel
oCKernelTraits< SphericalKernel >Kernel traits for the spherical kernel
oCKernelTraits< TriangularKernel >Kernel traits for the triangular kernel
oCKMeansSelection< ClusteringType, maxIterations >Implementation of the kmeans sampling scheme
oCLaplacianKernelThe standard Laplacian kernel
oCLinearKernelThe simple linear kernel (dot product)
oCNystroemMethod< KernelType, PointSelectionPolicy >
oCOrderedSelection
oCPolynomialKernelThe simple polynomial kernel
oCPSpectrumStringKernelThe p-spectrum string kernel
oCRandomSelection
oCSphericalKernelThe spherical kernel, which is 1 when the distance between the two argument points is less than or equal to the bandwidth, or 0 otherwise
oCTriangularKernelThe trivially simple triangular kernel, defined by
oCAllowEmptyClustersPolicy which allows K-Means to create empty clusters without any error being reported
oCDualTreeKMeans< MetricType, MatType, TreeType >An algorithm for an exact Lloyd iteration which simply uses dual-tree nearest-neighbor search to find the nearest centroid for each point in the dataset
oCDualTreeKMeansRules< MetricType, TreeType >
oCElkanKMeans< MetricType, MatType >
oCHamerlyKMeans< MetricType, MatType >
oCKillEmptyClustersPolicy which allows K-Means to "kill" empty clusters without any error being reported
oCKMeans< MetricType, InitialPartitionPolicy, EmptyClusterPolicy, LloydStepType, MatType >This class implements K-Means clustering, using a variety of possible implementations of Lloyd's algorithm
oCMaxVarianceNewClusterWhen an empty cluster is detected, this class takes the point furthest from the centroid of the cluster with maximum variance as a new cluster
oCNaiveKMeans< MetricType, MatType >This is an implementation of a single iteration of Lloyd's algorithm for k-means
oCPellegMooreKMeans< MetricType, MatType >An implementation of Pelleg-Moore's 'blacklist' algorithm for k-means clustering
oCPellegMooreKMeansRules< MetricType, TreeType >The rules class for the single-tree Pelleg-Moore kd-tree traversal for k-means clustering
oCPellegMooreKMeansStatisticA statistic for trees which holds the blacklist for Pelleg-Moore k-means clustering (which represents the clusters that cannot possibly own any points in a node)
oCRandomPartitionA very simple partitioner which partitions the data randomly into the number of desired clusters
oCRefinedStartA refined approach for choosing initial points for k-means clustering
oCSampleInitialization
oCKernelPCA< KernelType, KernelRule >This class performs kernel principal components analysis (Kernel PCA), for a given kernel
oCNaiveKernelRule< KernelType >
oCNystroemKernelRule< KernelType, PointSelectionPolicy >
oCLocalCoordinateCodingAn implementation of Local Coordinate Coding (LCC) that codes data which approximately lives on a manifold using a variation of l1-norm regularized sparse coding; in LCC, the penalty on the absolute value of each point's coefficient for each atom is weighted by the squared distance of that point to that atom
oCConstraints< MetricType >Interface for generating distance based constraints on a given dataset, provided corresponding true labels and a quantity parameter (k) are specified
oCLMNN< MetricType, OptimizerType >An implementation of Large Margin nearest neighbor metric learning technique
oCLMNNFunction< MetricType >The Large Margin Nearest Neighbors function
oCLogProvides a convenient way to give formatted output
oCColumnsToBlocksTransform the columns of the given matrix into a block format
oCRangeType< T >Simple real-valued range
oCMatrixCompletionThis class implements the popular nuclear norm minimization heuristic for matrix completion problems
oCMeanShift< UseKernel, KernelType, MatType >This class implements mean shift clustering
oCBLEU< ElemType, PrecisionType >BLEU, or the Bilingual Evaluation Understudy, is an algorithm for evaluating the quality of text which has been machine translated from one natural language to another
oCIoU< UseCoordinates >Definition of Intersection over Union metric
oCIPMetric< KernelType >The inner product metric, IPMetric, takes a given Mercer kernel (KernelType), and when Evaluate() is called, returns the distance between the two points in kernel space:
oCLMetric< TPower, TTakeRoot >The L_p metric for arbitrary integer p, with an option to take the root
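The L_p metric above computes d(a, b) = (Σᵢ |aᵢ − bᵢ|^p)^(1/p), with the root optionally skipped (mirroring the TakeRoot idea) since the rooted and unrooted forms induce the same ordering. A hypothetical free function over plain vectors, not mlpack's templated `LMetric` interface:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// L_p distance between two equal-length vectors; takeRoot = false skips
// the final p-th root (cheaper, and order-preserving for comparisons).
double LPDistance(const std::vector<double>& a,
                  const std::vector<double>& b,
                  const int p,
                  const bool takeRoot = true)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    sum += std::pow(std::abs(a[i] - b[i]), p);
  return takeRoot ? std::pow(sum, 1.0 / p) : sum;
}
```

With p = 2 this is the Euclidean distance; with p = 1, the Manhattan distance.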
oCMahalanobisDistance< TakeRoot >The Mahalanobis distance, which is essentially a stretched Euclidean distance
oCNMS< UseCoordinates >Definition of Non-Maximal Suppression
oCMVUMeant to provide a good abstraction for users
oCNaiveBayesClassifier< ModelMatType >The simple Naive Bayes classifier
oCNCA< MetricType, OptimizerType >An implementation of Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique
oCSoftmaxErrorFunction< MetricType >The "softmax" stochastic neighbor assignment probability function
oCDrusillaSelect< MatType >
oCFurthestNSThis class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class
oCLSHSearch< SortPolicy, MatType >The LSHSearch class; this class builds a hash on the reference set and uses this hash to compute the distance-approximate nearest-neighbors of the given queries
oCNearestNSThis class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class
oCNeighborSearch< SortPolicy, MetricType, MatType, TreeType, DualTreeTraversalType, SingleTreeTraversalType >The NeighborSearch class is a template class for performing distance-based neighbor searches
oCNeighborSearchRules< SortPolicy, MetricType, TreeType >The NeighborSearchRules class is a template helper class used by NeighborSearch class when performing distance-based neighbor searches
oCNeighborSearchRules< SortPolicy, MetricType, TreeType >::CandidateCmpCompare two candidates based on the distance
oCNeighborSearchStat< SortPolicy >Extra data for each node in the tree
oCNSModel< SortPolicy >The NSModel class provides an easy way to serialize a model, abstracts away the different types of trees, and also reflects the NeighborSearch API
oCQDAFN< MatType >
oCRAModel< SortPolicy >The RAModel class provides an abstraction for the RASearch class, abstracting away the TreeType parameter and allowing it to be specified at runtime in this class
oCRAQueryStat< SortPolicy >Extra data for each node in the tree
oCRASearch< SortPolicy, MetricType, MatType, TreeType >The RASearch class: This class provides a generic manner to perform rank-approximate search via random-sampling
oCRASearchRules< SortPolicy, MetricType, TreeType >The RASearchRules class is a template helper class used by RASearch class when performing rank-approximate search via random-sampling
oCRAUtil
oCSparseAutoencoderA sparse autoencoder is a neural network whose aim is to learn compressed representations of the data, typically for dimensionality reduction, with a constraint on the activity of the neurons in the network
oCSparseAutoencoderFunctionThis is a class for the sparse autoencoder objective function
oCExactSVDPolicyImplementation of the exact SVD policy
oCPCA< DecompositionPolicy >This class implements principal components analysis (PCA)
oCQUICSVDPolicyImplementation of the QUIC-SVD policy
oCRandomizedBlockKrylovSVDPolicyImplementation of the randomized block Krylov SVD policy
oCRandomizedSVDPolicyImplementation of the randomized SVD policy
oCPerceptron< LearnPolicy, WeightInitializationPolicy, MatType >This class implements a simple perceptron (i.e., a single layer neural network)
oCRandomInitializationThis class is used to initialize weights for the weightVectors matrix in a random manner
oCSimpleWeightUpdate
oCZeroInitializationThis class is used to initialize the matrix weightVectors to zero
oCRadicalAn implementation of RADICAL, an algorithm for independent component analysis (ICA)
oCRangeSearch< MetricType, MatType, TreeType >The RangeSearch class is a template class for performing range searches
oCRangeSearchRules< MetricType, TreeType >The RangeSearchRules class is a template helper class used by RangeSearch class when performing range searches
oCRangeSearchStatStatistic class for RangeSearch, to be set to the StatisticType of the tree type that range search is being performed with
oCRSModel
oCBayesianLinearRegressionA Bayesian approach to the maximum likelihood estimation of the parameters $ \omega $ of the linear regression model
oCLARSAn implementation of LARS, a stage-wise homotopy-based algorithm for l1-regularized linear regression (LASSO) and l1+l2 regularized linear regression (Elastic Net)
oCLinearRegressionA simple linear regression algorithm using ordinary least squares
oCLogisticRegression< MatType >The LogisticRegression class implements an L2-regularized logistic regression model, and supports training with multiple optimizers and classification
oCLogisticRegressionFunction< MatType >The log-likelihood function for the logistic regression objective function
oCSoftmaxRegressionSoftmax Regression is a classifier which can be used for classification when the data available can take two or more class values
oCSoftmaxRegressionFunction
oCAcrobotImplementation of Acrobot game
oCAcrobot::Action
oCAcrobot::State
oCAggregatedPolicy< PolicyType >
oCAsyncLearning< WorkerType, EnvironmentType, NetworkType, UpdaterType, PolicyType >Wrapper of various asynchronous learning algorithms, e.g
oCCartPoleImplementation of Cart Pole task
oCCartPole::ActionImplementation of action of Cart Pole
oCCartPole::StateImplementation of the state of Cart Pole
oCCategoricalDQN< OutputLayerType, InitType, NetworkType >Implementation of the Categorical Deep Q-Learning network
oCContinuousActionEnvTo use the dummy environment, one may start by specifying the state and action dimensions
oCContinuousActionEnv::ActionImplementation of continuous action
oCContinuousActionEnv::StateImplementation of state of the dummy environment
oCContinuousDoublePoleCartImplementation of Continuous Double Pole Cart Balancing task
oCContinuousDoublePoleCart::ActionImplementation of action of Continuous Double Pole Cart
oCContinuousDoublePoleCart::StateImplementation of the state of Continuous Double Pole Cart
oCContinuousMountainCarImplementation of Continuous Mountain Car task
oCContinuousMountainCar::ActionImplementation of action of Continuous Mountain Car
oCContinuousMountainCar::StateImplementation of state of Continuous Mountain Car
oCDiscreteActionEnvTo use the dummy environment, one may start by specifying the state and action dimensions
oCDiscreteActionEnv::ActionImplementation of discrete action
oCDiscreteActionEnv::StateImplementation of state of the dummy environment
oCDoublePoleCartImplementation of Double Pole Cart Balancing task
oCDoublePoleCart::ActionImplementation of action of Double Pole Cart
oCDoublePoleCart::StateImplementation of the state of Double Pole Cart
oCDuelingDQN< OutputLayerType, InitType, CompleteNetworkType, FeatureNetworkType, AdvantageNetworkType, ValueNetworkType >Implementation of the Dueling Deep Q-Learning network
oCGreedyPolicy< EnvironmentType >Implementation for epsilon greedy policy
oCMountainCarImplementation of Mountain Car task
oCMountainCar::ActionImplementation of action of Mountain Car
oCMountainCar::StateImplementation of state of Mountain Car
oCNStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType >Forward declaration of NStepQLearningWorker
oCOneStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType >Forward declaration of OneStepQLearningWorker
oCOneStepSarsaWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType >Forward declaration of OneStepSarsaWorker
oCPendulumImplementation of Pendulum task
oCPendulum::ActionImplementation of action of Pendulum
oCPendulum::StateImplementation of state of Pendulum
oCPrioritizedReplay< EnvironmentType >Implementation of prioritized experience replay
oCPrioritizedReplay< EnvironmentType >::Transition
oCQLearning< EnvironmentType, NetworkType, UpdaterType, PolicyType, ReplayType >Implementation of various Q-Learning algorithms, such as DQN, double DQN
oCRandomReplay< EnvironmentType >Implementation of random experience replay
oCRandomReplay< EnvironmentType >::Transition
oCRewardClipping< EnvironmentType >Interface for clipping the reward to some value between the specified minimum and maximum value (clipping here is implemented as $ g_{\text{clipped}} = \max(g_{\text{min}}, \min(g_{\text{max}}, g)) $)
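Clamping a reward g into the interval [g_min, g_max] is exactly what `std::clamp` computes. A minimal sketch of the operation, independent of mlpack's `RewardClipping` interface:

```cpp
#include <algorithm>
#include <cassert>

// Clip a reward into [gMin, gMax]: values below gMin saturate to gMin,
// values above gMax saturate to gMax, everything in between passes through.
double ClipReward(const double g, const double gMin, const double gMax)
{
  return std::clamp(g, gMin, gMax);
}
```

Reward clipping of this kind is commonly used in deep Q-learning to keep gradient magnitudes comparable across environments.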
oCSAC< EnvironmentType, QNetworkType, PolicyNetworkType, UpdaterType, ReplayType >Implementation of Soft Actor-Critic, a model-free off-policy actor-critic based deep reinforcement learning algorithm
oCSimpleDQN< OutputLayerType, InitType, NetworkType >
oCSumTree< T >Implementation of SumTree
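A sum tree stores priorities at its leaves and subtree sums at internal nodes, so that drawing a sample proportional to priority (as in prioritized experience replay) takes O(log n). An illustrative standard-C++ sketch of the structure, not mlpack's `SumTree< T >` interface:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Flat-array sum tree: leaves live at [capacity, 2*capacity), internal
// node i has children 2i and 2i+1, and tree[1] holds the total priority.
class SumTree
{
 public:
  explicit SumTree(std::size_t capacity)
      : capacity(capacity), tree(2 * capacity, 0.0) { }

  // Set leaf i's priority and refresh the sums on its root path.
  void Set(std::size_t i, double priority)
  {
    i += capacity;
    tree[i] = priority;
    for (i /= 2; i >= 1; i /= 2)
      tree[i] = tree[2 * i] + tree[2 * i + 1];
  }

  double Total() const { return tree[1]; }

  // Return the leaf whose cumulative-priority interval contains `mass`,
  // where mass is drawn uniformly from [0, Total()).
  std::size_t Retrieve(double mass) const
  {
    std::size_t i = 1;
    while (i < capacity)
    {
      if (mass <= tree[2 * i])
        i = 2 * i;            // Descend left: mass falls in left subtree.
      else
      {
        mass -= tree[2 * i];  // Skip the left subtree's total mass.
        i = 2 * i + 1;
      }
    }
    return i - capacity;
  }

 private:
  std::size_t capacity;
  std::vector<double> tree;
};
```

Sampling then amounts to drawing a uniform number in [0, Total()) and calling Retrieve(), so high-priority transitions are replayed more often.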
oCTrainingConfig
oCMethodFormDetector< Class, MethodForm, AdditionalArgsCount >
oCMethodFormDetector< Class, MethodForm, 0 >
oCMethodFormDetector< Class, MethodForm, 1 >
oCMethodFormDetector< Class, MethodForm, 2 >
oCMethodFormDetector< Class, MethodForm, 3 >
oCMethodFormDetector< Class, MethodForm, 4 >
oCMethodFormDetector< Class, MethodForm, 5 >
oCMethodFormDetector< Class, MethodForm, 6 >
oCMethodFormDetector< Class, MethodForm, 7 >
oCDataDependentRandomInitializerA data-dependent random dictionary initializer for SparseCoding
oCNothingInitializerA DictionaryInitializer for SparseCoding which does not initialize anything; it is useful for when the dictionary is already known and will be set with SparseCoding::Dictionary()
oCRandomInitializerA DictionaryInitializer for use with the SparseCoding class
oCSparseCodingAn implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net)
oCBiasSVD< OptimizerType >Bias SVD is an improvement on Regularized SVD, which is a matrix factorization technique
oCBiasSVDFunction< MatType >This class contains methods which are used to calculate the cost of BiasSVD's objective function, to calculate gradient of parameters with respect to the objective function, etc
oCQUIC_SVDQUIC-SVD is a matrix factorization technique, which operates in a subspace such that A's approximation in that subspace has minimum error (A being the data matrix)
oCRandomizedBlockKrylovSVDRandomized block Krylov SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition"
oCRandomizedSVDRandomized SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions"
oCRegularizedSVD< OptimizerType >Regularized SVD is a matrix factorization technique that seeks to reduce the error on the training set, that is, on the examples for which the ratings have been provided by the users
oCRegularizedSVDFunction< MatType >The data is stored in a matrix of type MatType, so that this class can be used with both dense and sparse matrix types
oCSVDPlusPlus< OptimizerType >SVD++ is a matrix decomposition technique used in collaborative filtering
oCSVDPlusPlusFunction< MatType >This class contains methods which are used to calculate the cost of SVD++'s objective function, to calculate gradient of parameters with respect to the objective function, etc
oCLinearSVM< MatType >The LinearSVM class implements an L2-regularized support vector machine model, and supports training with multiple optimizers and classification
oCLinearSVMFunction< MatType >The hinge loss function for the linear SVM objective function
oCTimerThe timer class provides a way for mlpack methods to be timed
oCTimers
oCAllCategoricalSplit< FitnessFunction >The AllCategoricalSplit is a splitting function that will split categorical features into many children: one child for each category
oCAllCategoricalSplit< FitnessFunction >::AuxiliarySplitInfo< ElemType >
oCAllDimensionSelectThis dimension selection policy allows any dimension to be selected for splitting
oCAxisParallelProjVectorAxisParallelProjVector defines an axis-parallel projection vector
oCBestBinaryNumericSplit< FitnessFunction >The BestBinaryNumericSplit is a splitting function for decision trees that will exhaustively search a numeric dimension for the best binary split
oCBestBinaryNumericSplit< FitnessFunction >::AuxiliarySplitInfo< ElemType >
oCBinaryNumericSplit< FitnessFunction, ObservationType >The BinaryNumericSplit class implements the numeric feature splitting strategy devised by Gama, Rocha, and Medas in the following paper:
oCBinaryNumericSplitInfo< ObservationType >
oCBinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType >A binary space partitioning tree, such as a KD-tree or a ball tree
oCBinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType >::BreadthFirstDualTreeTraverser< RuleType >
oCBinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType >::DualTreeTraverser< RuleType >A dual-tree traverser for binary space trees; see dual_tree_traverser.hpp
oCBinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType >::SingleTreeTraverser< RuleType >A single-tree traverser for binary space trees; see single_tree_traverser.hpp for implementation
oCCategoricalSplitInfo
oCCompareCosineNode
oCCosineTree
oCCoverTree< MetricType, StatisticType, MatType, RootPointPolicy >A cover tree is a tree specifically designed to speed up nearest-neighbor computation in high-dimensional spaces
oCCoverTree< MetricType, StatisticType, MatType, RootPointPolicy >::DualTreeTraverser< RuleType >A dual-tree cover tree traverser; see dual_tree_traverser.hpp
oCCoverTree< MetricType, StatisticType, MatType, RootPointPolicy >::SingleTreeTraverser< RuleType >A single-tree cover tree traverser; see single_tree_traverser.hpp for implementation
oCDiscreteHilbertValue< TreeElemType >The DiscreteHilbertValue class stores Hilbert values for all of the points in a RectangleTree node, and calculates Hilbert values for new points
oCEmptyStatisticEmpty statistic if you are not interested in storing statistics in your tree
oCExampleTree< MetricType, StatisticType, MatType >This is not an actual space tree but instead an example tree that exists to show and document all the functions that mlpack trees must implement
oCFirstPointIsRootThis class is meant to be used as a choice for the policy class RootPointPolicy of the CoverTree class
oCGiniGainThe Gini gain, a measure of set purity usable as a fitness function (FitnessFunction) for decision trees
oCGiniImpurity
oCGreedySingleTreeTraverser< TreeType, RuleType >
oCHilbertRTreeAuxiliaryInformation< TreeType, HilbertValueType >
oCHilbertRTreeDescentHeuristicThis class chooses the best child of a node in a Hilbert R tree when inserting a new point
oCHilbertRTreeSplit< splitOrder >The splitting procedure for the Hilbert R tree
oCHoeffdingCategoricalSplit< FitnessFunction >This is the standard Hoeffding-bound categorical feature proposed in the paper below:
oCHoeffdingInformationGain
oCHoeffdingNumericSplit< FitnessFunction, ObservationType >The HoeffdingNumericSplit class implements the numeric feature splitting strategy alluded to by Domingos and Hulten in the following paper:
oCHoeffdingTree< FitnessFunction, NumericSplitType, CategoricalSplitType >The HoeffdingTree object represents all of the necessary information for a Hoeffding-bound-based decision tree
oCHoeffdingTreeModelThis class is a serializable Hoeffding tree model that can hold four different types of Hoeffding trees
oCHyperplaneBase< BoundT, ProjVectorT >HyperplaneBase defines a splitting hyperplane based on a projection vector and projection value
oCInformationGainThe standard information gain criterion, used for calculating gain in decision trees
oCIsSpillTree< TreeType >
oCIsSpillTree< tree::SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > >
oCMeanSpaceSplit< MetricType, MatType >
oCMeanSplit< BoundType, MatType >A binary space partitioning tree node is split into its left and right child
oCMeanSplit< BoundType, MatType >::SplitInfoInformation about the partition
oCMidpointSpaceSplit< MetricType, MatType >
oCMidpointSplit< BoundType, MatType >A binary space partitioning tree node is split into its left and right child
oCMidpointSplit< BoundType, MatType >::SplitInfoA struct that contains information about the split
oCMinimalCoverageSweep< SplitPolicy >The MinimalCoverageSweep class finds a partition along which we can split a node according to the coverage of two resulting nodes
oCMinimalCoverageSweep< SplitPolicy >::SweepCost< TreeType >A struct that provides the type of the sweep cost
oCMinimalSplitsNumberSweep< SplitPolicy >The MinimalSplitsNumberSweep class finds a partition along which we can split a node according to the number of required splits of the node
oCMinimalSplitsNumberSweep< SplitPolicy >::SweepCost< typename >A struct that provides the type of the sweep cost
oCMultipleRandomDimensionSelectThis dimension selection policy allows the selection from a few random dimensions
oCNoAuxiliaryInformation< TreeType >
oCNumericSplitInfo< ObservationType >
oCOctree< MetricType, StatisticType, MatType >
oCOctree< MetricType, StatisticType, MatType >::DualTreeTraverser< MetricType, StatisticType, MatType >A dual-tree traverser; see dual_tree_traverser.hpp
oCOctree< MetricType, StatisticType, MatType >::SingleTreeTraverser< RuleType >A single-tree traverser; see single_tree_traverser.hpp
oCOctree< MetricType, StatisticType, MatType >::SplitType::SplitInfo
oCProjVectorProjVector defines a general projection vector (not necessarily axis-parallel)
oCQueueFrame< TreeType, TraversalInfoType >
oCRandomDimensionSelectThis dimension selection policy only selects one single random dimension
oCRandomForest< FitnessFunction, DimensionSelectionType, NumericSplitType, CategoricalSplitType, ElemType >
oCRectangleTree< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType >A rectangle-type tree, such as an R-tree or X-tree
oCRectangleTree< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType >::DualTreeTraverser< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType >A dual tree traverser for rectangle type trees
oCRectangleTree< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType >::SingleTreeTraverser< RuleType >A single-tree traverser for rectangle-type trees
oCRPlusPlusTreeAuxiliaryInformation< TreeType >
oCRPlusPlusTreeDescentHeuristic
oCRPlusPlusTreeSplitPolicyThe RPlusPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split
oCRPlusTreeDescentHeuristic
oCRPlusTreeSplit< SplitPolicyType, SweepType >The RPlusTreeSplit class performs the split process of a node on overflow
oCRPlusTreeSplitPolicyThe RPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split
oCRPTreeMaxSplit< BoundType, MatType >This class splits a node by a random hyperplane
oCRPTreeMaxSplit< BoundType, MatType >::SplitInfoInformation about the partition
oCRPTreeMeanSplit< BoundType, MatType >This class splits a binary space tree
oCRPTreeMeanSplit< BoundType, MatType >::SplitInfoInformation about the partition
oCRStarTreeDescentHeuristicWhen descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them
oCRStarTreeSplitA Rectangle Tree has new points inserted at the bottom
oCRTreeDescentHeuristicWhen descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them
oCRTreeSplitA Rectangle Tree has new points inserted at the bottom
oCSpaceSplit< MetricType, MatType >
oCSpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType >A hybrid spill tree is a variant of binary space trees in which the children of a node can "spill over" each other, and contain shared datapoints
oCSpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType >::SpillDualTreeTraverser< MetricType, StatisticType, MatType, HyperplaneType, SplitType >A generic dual-tree traverser for hybrid spill trees; see spill_dual_tree_traverser.hpp for implementation
oCSpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType >::SpillSingleTreeTraverser< MetricType, StatisticType, MatType, HyperplaneType, SplitType >A generic single-tree traverser for hybrid spill trees; see spill_single_tree_traverser.hpp for implementation
oCTraversalInfo< TreeType >The TraversalInfo class holds traversal information which is used in dual-tree (and single-tree) traversals
oCTreeTraits< TreeType >The TreeTraits class provides compile-time information on the characteristics of a given tree type
oCTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::BallBound, SplitType > >This is a specialization of the TreeType class to the BallTree tree type
oCTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::CellBound, SplitType > >This is a specialization of the TreeType class to the UBTree tree type
oCTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::HollowBallBound, SplitType > >This is a specialization of the TreeType class to an arbitrary tree with HollowBallBound (currently only the vantage point tree is supported)
oCTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMaxSplit > >This is a specialization of the TreeType class to the max-split random projection tree
oCTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMeanSplit > >This is a specialization of the TreeType class to the mean-split random projection tree
oCTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType > >This is a specialization of the TreeTraits class to the BinarySpaceTree tree type
oCTreeTraits< CoverTree< MetricType, StatisticType, MatType, RootPointPolicy > >The specialization of the TreeTraits class for the CoverTree tree type
oCTreeTraits< Octree< MetricType, StatisticType, MatType > >This is a specialization of the TreeTraits class to the Octree tree type
oCTreeTraits< RectangleTree< MetricType, StatisticType, MatType, RPlusTreeSplit< SplitPolicyType, SweepType >, DescentType, AuxiliaryInformationType > >Since the R+/R++ tree can not have overlapping children, we should define traits for the R+/R++ tree
oCTreeTraits< RectangleTree< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType > >This is a specialization of the TreeType class to the RectangleTree tree type
oCTreeTraits< SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > >This is a specialization of the TreeType class to the SpillTree tree type
oCUBTreeSplit< BoundType, MatType >Split a node into two parts according to the median address of points contained in the node
oCVantagePointSplit< BoundType, MatType, MaxNumSamples >The class splits a binary space partitioning tree node according to the median distance to the vantage point
oCVantagePointSplit< BoundType, MatType, MaxNumSamples >::SplitInfoA struct that contains information about the split
oCXTreeAuxiliaryInformation< TreeType >The XTreeAuxiliaryInformation class provides information specific to X trees for each node in a RectangleTree
oCXTreeAuxiliaryInformation< TreeType >::SplitHistoryStructThe X tree requires that the tree records its "split history"
oCXTreeSplitA RectangleTree has new points inserted at the bottom
oCBindingDetailsThis structure holds all of the information about bindings documentation
oCExample
oCIsStdVector< T >Metaprogramming structure for vector detection
oCIsStdVector< std::vector< T, A > >Metaprogramming structure for vector detection
oCLongDescription
oCNullOutStreamUsed for Log::Debug when not compiled with debugging symbols
oCParamDataThis structure holds all of the information about a single parameter, including its value (which is set when ParseCommandLine() is called)
oCPrefixedOutStreamAllows us to output to an ostream with a prefix at the beginning of each line, in the same way we would output to cout or cerr
oCProgramName
oCSeeAlso
oCShortDescription
oCNeighborSearchStat< neighbor::NearestNeighborSort >
oCtemplateAuxiliarySplitInfo< ElemType >
oCRangeType< double >
oCRangeType< ElemType >
oCRNN< OutputLayerType, InitializationRuleType, CustomLayers...>
oCtrue_type
oCSumTree< double >
oCTrainFormBase4< PT, void, const MT &, const PT & >
oCTrainFormBase5< PT, void, const MT &, const data::DatasetInfo &, const PT & >
oCTrainFormBase5< PT, void, const MT &, const PT &, const size_t >
oCTrainFormBase5< PT, WT, const MT &, const PT &, const WT & >
oCTrainFormBase6< PT, void, const MT &, const data::DatasetInfo &, const PT &, const size_t >
oCTrainFormBase6< PT, WT, const MT &, const data::DatasetInfo &, const PT &, const WT & >
oCTrainFormBase6< PT, WT, const MT &, const PT &, const size_t, const WT & >
oCTrainFormBase7< PT, WT, const MT &, const data::DatasetInfo &, const PT &, const size_t, const WT & >
\CTrainHMMModel