- class aydin.regression.lgbm.LGBMRegressor(num_leaves: Optional[int] = None, max_num_estimators: Optional[int] = None, max_bin: int = 512, learning_rate: Optional[float] = None, loss: str = 'l1', patience: int = 5, verbosity: int = -1, compute_load: float = 0.95, inference_mode: Optional[str] = None, compute_training_loss: bool = False)
The LightGBM regressor uses the gradient boosting library <a href="https://github.com/microsoft/LightGBM">LightGBM</a> to perform regression from a set of feature vectors and target values. LightGBM is a solid library, but we do not yet support GPU training and inference with it. Because of this lack of GPU support, LightGBM is slower than CatBoost; LightGBM sometimes gives better results than CatBoost, but not often enough to justify the loss of speed.
- fit(x_train, y_train, x_valid=None, y_valid=None, regressor_callback=None)
Fits the function y=f(x) given training pairs (x_train, y_train). Training stops when performance stops improving on the validation dataset (x_valid, y_valid). The target y_train can have multiple ‘channels’; in that case, multiple regressors are instantiated internally to predict these channels from the input features.
x_train: x training values
y_train: y training values
x_valid: x validation values
y_valid: y validation values
- static load(path: str)
Returns an ‘all-batteries-included’ regressor from a given path (folder).
path to load from.
- predict(x, models_to_use=None)
Predicts y given x by applying the learned function f: y=f(x). If the regressor was trained on multiple output channels, this will return the corresponding number of channels.
inferred y values
- recommended_max_num_datapoints() int
Recommended maximum number of datapoints
- save(path: str)
Saves an ‘all-batteries-included’ regressor at a given path (folder).
path to save to
- frozen
Encoded JSON object
- stop_fit()
Stops training (can be called by another thread)