Zettelkasten/Terminology Information

Regression tree

Computer-Nerd 2023. 3. 11.

Information

  • A regression tree is a decision tree used for regression problems, i.e., problems where the goal is to predict a continuous target variable.
  • It is built by a recursive partitioning algorithm that repeatedly splits the data, based on the values of the predictor variables, into subsets that are increasingly homogeneous with respect to the target.
  • At each node, the algorithm chooses the split that minimizes the sum of squared errors within the child nodes, which is equivalent to maximizing the variance reduction (or, in normalized form, the coefficient of determination, R-squared) achieved by the split.
  • The result is a tree in which each leaf node stores a prediction for the target variable, typically the mean of the target values of the training samples that fall into that leaf.
  • The model is easy to interpret, and it can capture nonlinear relationships between the predictor variables and the target variable.
  • However, it can suffer from overfitting, where the model fits the training data too closely and performs poorly on new, unseen data. Regularization techniques such as pruning or setting a minimum number of samples per leaf help prevent this (see the sketch below).
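
To make the points above concrete, here is a minimal sketch assuming scikit-learn's DecisionTreeRegressor (the library, the synthetic data, and the specific hyperparameter values are illustrative choices, not something prescribed in this note). Splits are chosen by minimizing the squared error, and min_samples_leaf together with cost-complexity pruning (ccp_alpha) regularizes the tree to limit overfitting.

```python
# Minimal sketch: fitting and regularizing a regression tree with scikit-learn.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic nonlinear data: y = sin(x) plus noise (illustrative only).
rng = np.random.RandomState(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# criterion="squared_error" (the default in recent scikit-learn versions)
# selects splits by minimizing the sum of squared errors within the children;
# min_samples_leaf and ccp_alpha (cost-complexity pruning) limit overfitting.
tree = DecisionTreeRegressor(
    criterion="squared_error",
    min_samples_leaf=5,
    ccp_alpha=0.001,
    random_state=0,
)
tree.fit(X_train, y_train)

# Each test point is routed to a leaf; the prediction is the mean target value
# of the training samples that ended up in that leaf.
print("Test MSE:", mean_squared_error(y_test, tree.predict(X_test)))
```

Larger values of ccp_alpha or min_samples_leaf give a smaller, more interpretable tree at the cost of some training accuracy; cross-validation is a common way to choose them.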
